Archive for the ‘Alphazero’ Category

A general reinforcement learning algorithm that masters …

One program to rule them all

Computers can beat humans at increasingly complex games, including chess and Go. However, these programs are typically constructed for a particular game, exploiting its properties, such as the symmetries of the board on which it is played. Silver et al. developed a program called AlphaZero, which taught itself to play Go, chess, and shogi (a Japanese version of chess) (see the Editorial, and the Perspective by Campbell). AlphaZero managed to beat state-of-the-art programs specializing in these three games. The ability of AlphaZero to adapt to various game rules is a notable step toward achieving a general game-playing system.

Science, this issue p. 1140; see also pp. 1087 and 1118

The game of chess is the longest-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. By contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go by reinforcement learning from self-play. In this paper, we generalize this approach into a single AlphaZero algorithm that can achieve superhuman performance in many challenging games. Starting from random play and given no domain knowledge except the game rules, AlphaZero convincingly defeated a world champion program in the games of chess and shogi (Japanese chess), as well as Go.

The study of computer chess is as old as computer science itself. Charles Babbage, Alan Turing, Claude Shannon, and John von Neumann devised hardware, algorithms, and theory to analyze and play the game of chess. Chess subsequently became a grand challenge task for a generation of artificial intelligence researchers, culminating in high-performance computer chess programs that play at a superhuman level (1, 2). However, these systems are highly tuned to their domain and cannot be generalized to other games without substantial human effort, whereas general game-playing systems (3, 4) remain comparatively weak.

A long-standing ambition of artificial intelligence has been to create programs that can instead learn for themselves from first principles (5, 6). Recently, the AlphaGo Zero algorithm achieved superhuman performance in the game of Go by representing Go knowledge with the use of deep convolutional neural networks (7, 8), trained solely by reinforcement learning from games of self-play (9). In this paper, we introduce AlphaZero, a more generic version of the AlphaGo Zero algorithm that accommodates, without special casing, a broader class of game rules. We apply AlphaZero to the games of chess and shogi, as well as Go, by using the same algorithm and network architecture for all three games. Our results demonstrate that a general-purpose reinforcement learning algorithm can learn, tabula rasa (without domain-specific human knowledge or data, as evidenced by the same algorithm succeeding in multiple domains), superhuman performance across multiple challenging games.

A landmark for artificial intelligence was achieved in 1997 when Deep Blue defeated the human world chess champion (1). Computer chess programs continued to progress steadily beyond human level in the following two decades. These programs evaluate positions by using handcrafted features and carefully tuned weights, constructed by strong human players and programmers, combined with a high-performance alpha-beta search that expands a vast search tree by using a large number of clever heuristics and domain-specific adaptations. In (10) we describe these augmentations, focusing on the 2016 Top Chess Engine Championship (TCEC) season 9 world champion Stockfish (11); other strong chess programs, including Deep Blue, use very similar architectures (1, 12).

In terms of game tree complexity, shogi is a substantially harder game than chess (13, 14): It is played on a larger board with a wider variety of pieces; any captured opponent piece switches sides and may subsequently be dropped anywhere on the board. The strongest shogi programs, such as the 2017 Computer Shogi Association (CSA) world champion Elmo, have only recently defeated human champions (15). These programs use an algorithm similar to those used by computer chess programs, again based on a highly optimized alpha-beta search engine with many domain-specific adaptations.

AlphaZero replaces the handcrafted knowledge and domain-specific augmentations used in traditional game-playing programs with deep neural networks, a general-purpose reinforcement learning algorithm, and a general-purpose tree search algorithm.

Instead of a handcrafted evaluation function and move-ordering heuristics, AlphaZero uses a deep neural network (p, v) = f_θ(s) with parameters θ. This neural network f_θ(s) takes the board position s as an input and outputs a vector of move probabilities p with components p_a = Pr(a|s) for each action a and a scalar value v estimating the expected outcome z of the game from position s, v ≈ E[z|s]. AlphaZero learns these move probabilities and value estimates entirely from self-play; these are then used to guide its search in future games.
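To make the two-headed interface concrete, here is a minimal sketch of a policy/value network with the signature (p, v) = f_θ(s). The class name, layer sizes, and trunk depth are illustrative assumptions, not the configuration reported in the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyValueNet(nn.Module):
    """Minimal two-headed network: (p, v) = f_theta(s).

    Shapes and layer sizes are placeholders for illustration only.
    """
    def __init__(self, in_planes: int, board_size: int, num_actions: int, channels: int = 64):
        super().__init__()
        # Shared trunk over the stacked board-encoding planes s.
        self.trunk = nn.Sequential(
            nn.Conv2d(in_planes, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(),
        )
        flat = channels * board_size * board_size
        self.policy_head = nn.Linear(flat, num_actions)  # logits over move encodings
        self.value_head = nn.Linear(flat, 1)             # scalar outcome estimate

    def forward(self, s: torch.Tensor):
        x = self.trunk(s).flatten(start_dim=1)
        p = F.softmax(self.policy_head(x), dim=-1)       # move probabilities p_a = Pr(a|s)
        v = torch.tanh(self.value_head(x)).squeeze(-1)   # value estimate v ≈ E[z|s], in [-1, 1]
        return p, v
```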

Instead of an alpha-beta search with domain-specific enhancements, AlphaZero uses a general-purpose Monte Carlo tree search (MCTS) algorithm. Each search consists of a series of simulated games of self-play that traverse a tree from root state s_root until a leaf state is reached. Each simulation proceeds by selecting in each state s a move a with low visit count (not previously frequently explored), high move probability, and high value (averaged over the leaf states of simulations that selected a from s) according to the current neural network f_θ. The search returns a vector π representing a probability distribution over moves, with components π_a = Pr(a|s_root).
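The selection rule described above trades off visit count, prior move probability, and averaged value. A minimal PUCT-style sketch of such a choice is shown below; the node fields (N, W, P), the constant c_puct, and the data layout are assumptions for illustration and are not taken from the paper's pseudocode.

```python
import math

def select_move(node, c_puct: float = 1.5):
    """Pick the action maximizing Q(s, a) + U(s, a), where U favours moves
    with high prior probability and low visit count.

    `node` is assumed to expose, per action a: N[a] (visit count),
    W[a] (total value of simulations through a), and P[a] (network prior).
    """
    total_visits = sum(node.N.values())
    best_action, best_score = None, -float("inf")
    for a in node.P:
        q = node.W[a] / node.N[a] if node.N[a] > 0 else 0.0      # averaged value
        u = c_puct * node.P[a] * math.sqrt(total_visits) / (1 + node.N[a])  # exploration bonus
        if q + u > best_score:
            best_action, best_score = a, q + u
    return best_action
```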

The parameters θ of the deep neural network in AlphaZero are trained by reinforcement learning from self-play games, starting from randomly initialized parameters θ. Each game is played by running an MCTS from the current position s_root = s_t at turn t and then selecting a move, a_t ~ π_t, either proportionally (for exploration) or greedily (for exploitation) with respect to the visit counts at the root state. At the end of the game, the terminal position s_T is scored according to the rules of the game to compute the game outcome z: −1 for a loss, 0 for a draw, and +1 for a win. The neural network parameters θ are updated to minimize the error between the predicted outcome v_t and the game outcome z and to maximize the similarity of the policy vector p_t to the search probabilities π_t. Specifically, the parameters θ are adjusted by gradient descent on a loss function l that sums over mean-squared error and cross-entropy losses,

l = (z − v)² − π⊤ log p + c‖θ‖²     (1)

where c is a parameter controlling the level of L2 weight regularization. The updated parameters are used in subsequent games of self-play.
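A sketch of Eq. 1 as a training loss follows. The batching convention and the value of the regularization constant c are assumptions for illustration, not the paper's settings.

```python
import torch
import torch.nn.functional as F

def alphazero_loss(p, v, pi, z, params, c: float = 1e-4):
    """Loss from Eq. 1: l = (z - v)^2 - pi^T log p + c * ||theta||^2.

    p, v   : network outputs (move probabilities, value) for a batch
    pi, z  : MCTS search probabilities and final game outcomes
    params : iterable of network parameters theta (for L2 regularization)
    """
    value_loss = F.mse_loss(v, z)                                   # (z - v)^2
    policy_loss = -(pi * torch.log(p + 1e-8)).sum(dim=-1).mean()    # cross-entropy to search probs
    l2 = c * sum((w ** 2).sum() for w in params)                    # c * ||theta||^2
    return value_loss + policy_loss + l2
```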

The AlphaZero algorithm described in this paper [see (10) for the pseudocode] differs from the original AlphaGo Zero algorithm in several respects.

AlphaGo Zero estimated and optimized the probability of winning, exploiting the fact that Go games have a binary win or loss outcome. However, both chess and shogi may end in drawn outcomes; it is believed that the optimal solution to chess is a draw (16-18). AlphaZero instead estimates and optimizes the expected outcome.

The rules of Go are invariant to rotation and reflection. This fact was exploited in AlphaGo and AlphaGo Zero in two ways. First, training data were augmented by generating eight symmetries for each position. Second, during MCTS, board positions were transformed by using a randomly selected rotation or reflection before being evaluated by the neural network, so that the Monte Carlo evaluation was averaged over different biases. To accommodate a broader class of games, AlphaZero does not assume symmetry; the rules of chess and shogi are asymmetric (e.g., pawns only move forward, and castling is different on kingside and queenside). AlphaZero does not augment the training data and does not transform the board position during MCTS.
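For reference, the eight symmetries that AlphaGo Zero exploited and AlphaZero omits are the four rotations of the board combined with an optional reflection. A small illustrative sketch (array layout assumed; the policy targets would need the matching transform):

```python
import numpy as np

def eight_symmetries(board: np.ndarray):
    """Return the eight dihedral symmetries of a square board encoding:
    four 90-degree rotations, each with and without a reflection.
    Assumes the last two axes are the board's spatial dimensions.
    """
    syms = []
    for k in range(4):
        rotated = np.rot90(board, k, axes=(-2, -1))   # rotate by k * 90 degrees
        syms.append(rotated)
        syms.append(np.flip(rotated, axis=-1))        # and its mirror image
    return syms
```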

In AlphaGo Zero, self-play games were generated by the best player from all previous iterations. After each iteration of training, the performance of the new player was measured against the best player; if the new player won by a margin of 55%, then it replaced the best player. By contrast, AlphaZero simply maintains a single neural network that is updated continually rather than waiting for an iteration to complete. Self-play games are always generated by using the latest parameters for this neural network.

As in AlphaGo Zero, the board state is encoded by spatial planes based only on the basic rules for each game. The actions are encoded by either spatial planes or a flat vector, again based only on the basic rules for each game (10).

AlphaGo Zero used a convolutional neural network architecture that is particularly well-suited to Go: The rules of the game are translationally invariant (matching the weight-sharing structure of convolutional networks) and are defined in terms of liberties corresponding to the adjacencies between points on the board (matching the local structure of convolutional networks). By contrast, the rules of chess and shogi are position dependent (e.g., pawns may move two steps forward from the second rank and promote on the eighth rank) and include long-range interactions (e.g., the queen may traverse the board in one move). Despite these differences, AlphaZero uses the same convolutional network architecture as AlphaGo Zero for chess, shogi, and Go.
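The shared architecture is a residual convolutional tower in the style of AlphaGo Zero. A minimal sketch of one such block is below; the channel count and block structure here are illustrative rather than the exact configuration, which is given in (10).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    """One block of an AlphaGo Zero-style residual convolutional tower."""
    def __init__(self, channels: int = 256):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)

    def forward(self, x):
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        # Weight sharing in the convolutions is translationally invariant;
        # the skip connection lets many blocks be stacked stably.
        return F.relu(out + x)
```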

The hyperparameters of AlphaGo Zero were tuned by Bayesian optimization. In AlphaZero, we reuse the same hyperparameters, algorithm settings, and network architecture for all games without game-specific tuning. The only exceptions are the exploration noise and the learning rate schedule [see (10) for further details].

We trained separate instances of AlphaZero for chess, shogi, and Go. Training proceeded for 700,000 steps (in mini-batches of 4096 training positions) starting from randomly initialized parameters. During training only, 5000 first-generation tensor processing units (TPUs) (19) were used to generate self-play games, and 16 second-generation TPUs were used to train the neural networks. Training lasted for approximately 9 hours in chess, 12 hours in shogi, and 13 days in Go (see table S3) (20). Further details of the training procedure are provided in (10).

Figure 1 shows the performance of AlphaZero during self-play reinforcement learning, as a function of training steps, on an Elo (21) scale (22). In chess, AlphaZero first outperformed Stockfish after just 4 hours (300,000 steps); in shogi, AlphaZero first outperformed Elmo after 2 hours (110,000 steps); and in Go, AlphaZero first outperformed AlphaGo Lee (9) after 30 hours (74,000 steps). The training algorithm achieved similar performance in all independent runs (see fig. S3), suggesting that the high performance of AlphaZero's training algorithm is repeatable.

Elo ratings were computed from games between different players where each player was given 1 s per move. (A) Performance of AlphaZero in chess compared with the 2016 TCEC world champion program Stockfish. (B) Performance of AlphaZero in shogi compared with the 2017 CSA world champion program Elmo. (C) Performance of AlphaZero in Go compared with AlphaGo Lee and AlphaGo Zero (20 blocks over 3 days).

We evaluated the fully trained instances of AlphaZero against Stockfish, Elmo, and the previous version of AlphaGo Zero in chess, shogi, and Go, respectively. Each program was run on the hardware for which it was designed (23): Stockfish and Elmo used 44 central processing unit (CPU) cores (as in the TCEC world championship), whereas AlphaZero and AlphaGo Zero used a single machine with four first-generation TPUs and 44 CPU cores (24). The chess match was played against the 2016 TCEC (season 9) world champion Stockfish [see (10) for details]. The shogi match was played against the 2017 CSA world champion version of Elmo (10). The Go match was played against the previously published version of AlphaGo Zero [also trained for 700,000 steps (25)]. All matches were played by using time controls of 3 hours per game, plus an additional 15 s for each move.

In Go, AlphaZero defeated AlphaGo Zero (9), winning 61% of games. This demonstrates that a general approach can recover the performance of an algorithm that exploited board symmetries to generate eight times as much data (see fig. S1).

In chess, AlphaZero defeated Stockfish, winning 155 games and losing 6 games out of 1000 (Fig. 2). To verify the robustness of AlphaZero, we played additional matches that started from common human openings (Fig. 3). AlphaZero defeated Stockfish in each opening, suggesting that AlphaZero has mastered a wide spectrum of chess play. The frequency plots in Fig. 3 and the time line in fig. S2 show that common human openings were independently discovered and played frequently by AlphaZero during self-play training. We also played a match that started from the set of opening positions used in the 2016 TCEC world championship; AlphaZero won convincingly in this match, too (26) (fig. S4). We played additional matches against the most recent development version of Stockfish (27) and a variant of Stockfish that uses a strong opening book (28). AlphaZero won all matches by a large margin (Fig. 2).

(A) Tournament evaluation of AlphaZero in chess, shogi, and Go in matches against, respectively, Stockfish, Elmo, and the previously published version of AlphaGo Zero (AG0) that was trained for 3 days. In the top bar, AlphaZero plays white; in the bottom bar, AlphaZero plays black. Each bar shows the results from AlphaZero's perspective: win (W; green), draw (D; gray), or loss (L; red). (B) Scalability of AlphaZero with thinking time compared with Stockfish and Elmo. Stockfish and Elmo always receive full time (3 hours per game plus 15 s per move); time for AlphaZero is scaled down as indicated. (C) Extra evaluations of AlphaZero in chess against the most recent version of Stockfish at the time of writing (27) and against Stockfish with a strong opening book (28). Extra evaluations of AlphaZero in shogi were carried out against another strong shogi program, Aperyqhapaq (29), at full time controls and against Elmo under 2017 CSA world championship time controls (10 min per game and 10 s per move). (D) Average result of chess matches starting from different opening positions, either common human positions (see also Fig. 3) or the 2016 TCEC world championship opening positions (see also fig. S4), and average result of shogi matches starting from common human positions (see also Fig. 3). CSA world championship games start from the initial board position. Match conditions are summarized in tables S8 and S9.

AlphaZero plays against (A) Stockfish in chess and (B) Elmo in shogi. In the left bar, AlphaZero plays white, starting from the given position; in the right bar, AlphaZero plays black. Each bar shows the results from AlphaZero's perspective: win (green), draw (gray), or loss (red). The percentage frequency of self-play training games in which this opening was selected by AlphaZero is plotted against the duration of training, in hours.

Table S6 shows 20 chess games played by AlphaZero in its matches against Stockfish. In several games, AlphaZero sacrificed pieces for long-term strategic advantage, suggesting that it has a more fluid, context-dependent positional evaluation than the rule-based evaluations used by previous chess programs.

In shogi, AlphaZero defeated Elmo, winning 98.2% of games when playing black and 91.2% overall. We also played a match under the faster time controls used in the 2017 CSA world championship and against another state-of-the-art shogi program (29); AlphaZero again won both matches by a wide margin (Fig. 2).

Table S7 shows 10 shogi games played by AlphaZero in its matches against Elmo. The frequency plots in Fig. 3 and the time line in fig. S2 show that AlphaZero frequently plays one of the two most common human openings but rarely plays the second, deviating on the very first move.

AlphaZero searches just 60,000 positions per second in chess and shogi, compared with 60 million for Stockfish and 25 million for Elmo (table S4). AlphaZero may compensate for the lower number of evaluations by using its deep neural network to focus much more selectively on the most promising variations (Fig. 4 provides an example from the match against Stockfish), arguably a more humanlike approach to searching, as originally proposed by Shannon (30). AlphaZero also defeated Stockfish when given 1/10 as much thinking time as its opponent (i.e., searching roughly 1/10 as many positions) and won 46% of games against Elmo when given 1/10 as much time (Fig. 2). The high performance of AlphaZero with the use of MCTS calls into question the widely held belief (31, 32) that alpha-beta search is inherently superior in these domains.

The search is illustrated for a position (inset) from game 1 (table S6) between AlphaZero (white) and Stockfish (black) after 29. ... Qf8. The internal state of AlphaZero's MCTS is summarized after 10^2, ..., 10^6 simulations. Each summary shows the 10 most visited states. The estimated value is shown in each state, from white's perspective, scaled to the range [0, 100]. The visit count of each state, relative to the root state of that tree, is proportional to the thickness of the border circle. AlphaZero considers 30. c6 but eventually plays 30. d5.

The game of chess represented the pinnacle of artificial intelligence research over several decades. State-of-the-art programs are based on powerful engines that search many millions of positions, leveraging handcrafted domain expertise and sophisticated domain adaptations. AlphaZero is a generic reinforcement learning and search algorithm, originally devised for the game of Go, that achieved superior results within a few hours, searching 1/1000 as many positions, given no domain knowledge except the rules of chess. Furthermore, the same algorithm was applied without modification to the more challenging game of shogi, again outperforming state-of-the-art programs within a few hours. These results bring us a step closer to fulfilling a longstanding ambition of artificial intelligence (3): a general game-playing system that can learn to master any game.

F.-H. Hsu, Behind Deep Blue: Building the Computer That Defeated the World Chess Champion (Princeton Univ., 2002).

C. J. Maddison, A. Huang, I. Sutskever, D. Silver, paper presented at the International Conference on Learning Representations 2015, San Diego, CA, 7 to 9 May 2015.

D. N. L. Levy, M. Newborn, How Computers Play Chess (Ishi Press, 2009).

V. Allis, Searching for solutions in games and artificial intelligence, Ph.D. thesis, Transnational University Limburg, Maastricht, Netherlands (1994).

W. Steinitz, The Modern Chess Instructor (Edition Olms, 1990).

E. Lasker, Common Sense in Chess (Dover Publications, 1965).

J. Knudsen, Essential Chess Quotations (iUniverse, 2000).

N. P. Jouppi, C. Young, N. Patil, D. Patterson, G. Agrawal, R. Bajwa, S. Bates, S. Bhatia, N. Boden, A. Borchers, R. Boyle, P. Cantin, C. Chao, C. Clark, J. Coriell, M. Daley, M. Dau, J. Dean, B. Gelb, T. V. Ghaemmaghami, R. Gottipati, W. Gulland, R. Hagmann, C. R. Ho, D. Hogberg, J. Hu, R. Hundt, D. Hurt, J. Ibarz, A. Jaffey, A. Jaworski, A. Kaplan, H. Khaitan, D. Killebrew, A. Koch, N. Kumar, S. Lacy, J. Laudon, J. Law, D. Le, C. Leary, Z. Liu, K. Lucke, A. Lundin, G. MacKean, A. Maggiore, M. Mahony, K. Miller, R. Nagarajan, R. Narayanaswami, R. Ni, K. Nix, T. Norrie, M. Omernick, N. Penukonda, A. Phelps, J. Ross, M. Ross, A. Salek, E. Samadiani, C. Severn, G. Sizikov, M. Snelham, J. Souter, D. Steinberg, A. Swing, M. Tan, G. Thorson, B. Tian, H. Toma, E. Tuttle, V. Vasudevan, R. Walter, W. Wang, E. Wilcox, D. H. Yoon, in Proceedings of the 44th Annual International Symposium on Computer Architecture, Toronto, Canada, 24 to 28 June 2017 (Association for Computing Machinery, 2017), pp. 1–12.

R. Coulom, in Proceedings of the Sixth International Conference on Computers and Games, Beijing, China, 29 September to 1 October 2008 (Springer, 2008), pp. 113–124.

O. Arenz, Monte Carlo chess, master's thesis, Technische Universität Darmstadt (2012).

O. E. David, N. S. Netanyahu, L. Wolf, in Artificial Neural Networks and Machine Learning – ICANN 2016, Part II, Barcelona, Spain, 6 to 9 September 2016 (Springer, 2016), pp. 88–96.

T. Marsland, Encyclopedia of Artificial Intelligence, S. Shapiro, Ed. (Wiley, 1987).

T. Kaneko, K. Hoki, in Advances in Computer Games: 13th International Conference, ACG 2011, Revised Selected Papers, Tilburg, Netherlands, 20 to 22 November 2011 (Springer, 2012), pp. 158–169.

M. Lai, Giraffe: Using deep reinforcement learning to play chess, master's thesis, Imperial College London (2015).

R. Ramanujan, A. Sabharwal, B. Selman, in Proceedings of the 26th Conference on Uncertainty in Artificial Intelligence (UAI 2010), Catalina Island, CA, 8 to 11 July (AUAI Press, 2010).

K. He, X. Zhang, S. Ren, J. Sun, in Computer Vision – ECCV 2016, 14th European Conference, Part IV, Amsterdam, Netherlands, 11 to 14 October 2016 (Springer, 2016), pp. 630–645.

Acknowledgments: We thank M. Sadler for analyzing chess games; Y. Habu for analyzing shogi games; L. Bennett for organizational assistance; B. Konrad, E. Lockhart, and G. Ostrovski for reviewing the paper; and the rest of the DeepMind team for their support. Funding: All research described in this report was funded by DeepMind and Alphabet. Author contributions: D.S., J.S., T.H., and I.A. designed the AlphaZero algorithm with advice from T.G., A.G., T.L., K.S., M.Lai, L.S., and M.Lan.; J.S., I.A., T.H., and M.Lai implemented the AlphaZero program; T.H., J.S., D.S., M.Lai, I.A., T.G., K.S., D.K., and D.H. ran experiments and/or analyzed data; D.S., T.H., J.S., and D.H. managed the project; D.S., J.S., T.H., M.Lai, I.A., and D.H. wrote the paper. Competing interests: DeepMind has filed the following patent applications related to this work: PCT/EP2018/063869, US15/280,711, and US15/280,784. Data and materials availability: A full description of the algorithm in pseudocode as well as details of additional games between AlphaZero and other programs is available in the supplementary materials.


What would it be like to be a conscious AI? We might never know. – MIT Technology Review

Humans are active listeners; we create meaning where there is none, or none intended. "It is not that the octopus's utterances make sense, but rather that the islander can make sense of them," Bender says.

For all their sophistication, today's AIs are intelligent in the same way a calculator might be said to be intelligent: they are both machines designed to convert input into output in ways that humans, who have minds, choose to interpret as meaningful. While neural networks may be loosely modeled on brains, the very best of them are vastly less complex than a mouse's brain.

And yet, we know that brains can produce what we understand to be consciousness. If we can eventually figure out how brains do it, and reproduce that mechanism in an artificial device, then surely a conscious machine might be possible?

When I was trying to imagine Robert's world in the opening to this essay, I found myself drawn to the question of what consciousness means to me. My conception of a conscious machine was undeniably, perhaps unavoidably, human-like. It is the only form of consciousness I can imagine, as it is the only one I have experienced. But is that really what it would be like to be a conscious AI?

It's probably hubristic to think so. The project of building intelligent machines is biased toward human intelligence. But the animal world is filled with a vast range of possible alternatives, from birds to bees to cephalopods.

A few hundred years ago the accepted view, pushed by René Descartes, was that only humans were conscious. Animals, lacking souls, were seen as mindless robots. Few think that today: if we are conscious, then there is little reason not to believe that mammals, with their similar brains, are conscious too. And why draw the line around mammals? Birds appear to reflect when they solve puzzles. Most animals, even invertebrates like shrimp and lobsters, show signs of feeling pain, which would suggest they have some degree of subjective consciousness.

But how can we truly picture what that must feel like? As the philosopher Thomas Nagel noted, it must be like something to be a bat, but what that is we cannot even imagine, because we cannot imagine what it would be like to observe the world through a kind of sonar. We can imagine what it might be like for us to do this (perhaps by closing our eyes and picturing a sort of echolocation point cloud of our surroundings), but that's still not what it must be like for a bat, with its bat mind.


AlphaZero to analyse no-castling match of the champions – Chessbase News

Press release

For the first time, the internationally renowned chess tournament Dortmund Chess Days will host a special match between legends Vladimir Kramnik and Viswanathan Anand playing the No-Castling chess format. The rich, creative possibilities of this chess variant were recently explored by the ground-breaking artificial intelligence system AlphaZero, created by world-leading AI company DeepMind. Online audiences will now get to experience novel AlphaZero insights firsthand in the post-match commentary of the no-castling tournament, as part of DeepMind's support for Dortmund Chess Days.

Demis Hassabis [pictured], DeepMind Founder and CEO, says:

It's been incredibly exciting to see world-class players like Vladimir Kramnik use AlphaZero to explore new possibilities for the game of chess. I'm looking forward to seeing two former world champions play the no-castling variant in Dortmund and hope the games - and AlphaZero's insights - inspire chess players everywhere.

Event director Carsten Hensel said:

The Dortmund Chess Days has found a growing audience online, which we would like to develop in the coming years. We hope the inclusion of the no-castling variant and the inclusion of cutting-edge AI analysis from AlphaZero will further reinforce Dortmund as a ground-breaking tournament and will attract people from around the world to watch what will surely be a remarkable and historic match.

The tournament will see Vladimir Kramnik and Viswanathan Anand play four games against each other with classical thinking time, but with no option of castling. This tiny rule change will force the players to deviate from memorized opening lines, encouraging new creative play while preserving the game's familiar rules and patterns.

Plans for the 2022 event, which will be sponsored by DeepMind, are already underway with the hope that the no-castling format will become a regular fixture and an audience favourite, thanks to dynamic and entertaining play.


Anand and Kramnik playing the World Chess Championship in 2008

DeepMind is a multidisciplinary team of scientists, engineers, machine learning experts and more, working together to research and build safe AI systems that learn how to solve problems and advance scientific discovery for all.

Having developed AlphaGo, the first program to beat a world champion at the complex game of Go, DeepMind has published over 1000 research papers including more than a dozen in the journals Nature and Science and achieved breakthrough results in many challenging AI domains from StarCraft II to protein folding.

DeepMind was founded in London in 2010, and joined forces with Google in 2014 to accelerate its work. Since then, its community has expanded to include teams in Alberta, Montreal, Paris, New York and Mountain View in California.

The purpose of the association, founded in 2019, is the promotion of chess. IPS pursues this goal by organizing chess events in the fields of sport, art, science, education, and cultural and chess history. The outstanding project of the IPS is the Sparkassen Chess Trophy International Dortmund Chess Days, with its famous history dating back to 1973. The IPS is developing a modern concept and pays considerable attention to the digital requirements of today, especially regarding chess and its modern development.



How This Startup Aims to Disrupt Copywriting Forever – Inc.

Writer's block is too often a big impediment to effective copy, which means it lowers a scribe's performance in the eyes of a client or employer. Every writer may go through it, but there are more demands placed on storytellers given today's breakneck speed of digital marketing and social media.

Copy.ai taps into the power of artificial intelligence (A.I.) to give professional wordsmiths, editors, marketers, and even students the ability to review several written versions of what they'd like to write about to overcome the psychological barrier of writer's block. This tool also eliminates those annoying errors and redundant phrases that glare at discerning readers.

Chris Lu and Paul Yacoubian founded Copy.ai to give content creators the ability to optimize written text and to democratize access to creativity.

Having tried it, I found the A.I.-powered tool turns concepts into conversational and relatable text. The site can optimize messages including product descriptions, blogs, social media posts, landing pages, and everything else with text.

A user types a description and the tool generates almost a dozen versions of possible headlines, intros, and bodies, and even Valentine's Day greetings. For example, the A.I. can ideate different versions of a paragraph from which to choose, even if you only input a few words to describe the subject matter.

Interestingly, the tool seems like a godsend for procrastinating students who pull all-nighters.

Disrupting a shrinking industry

According to the Bureau of Labor Statistics, there are 131,200 writers and authors in the United States. The average wage is $30 an hour.

There is an expected 2 percent decline in employment (equaling 3,100 fewer jobs) from 2019 to 2029.

The job losses may be exacerbated by A.I. and machine learning, since innovators are training these emerging technologies to completely replace human scribes. Whether that's possible remains to be seen. (This writer thinks that will definitely happen sooner or later.)

About six or seven years ago, there were primitive tools that attempted to rewrite communications that were copied and pasted from other websites. These were intended to pass plagiarism checks. But the tech in those days produced such bad output as to make them unusable.

Fast-forward to today, and A.I. wordsmiths can now craft intelligent phrases that seem more human than your average scribe's. Therefore, the future of copywriting is here.

Just watch how IBM Watson destroyed human competitors on Jeopardy. And that took place a decade ago. Similarly, it's now almost impossible for the best chess players to defeat Google's AlphaZero.

It may be humbling, but truth is truth.

The human touch

In music, an emotive rendition of The Sound of Music or other classics makes the sheet come alive. Can computers match the rhythmic artistry of the masters? Time will tell.

When it comes to writing styles, there's an overwhelming preference for simple, digestible language. Nonfiction is the dominant force that has drastically reduced the existence of fiction-writing artistry.

With Copy.ai, it's difficult for an editor or audience to determine that the communication they're seeing on a device screen is crafted by a nonsentient entity. The output text is relatable, which creates the illusion of a personal touch. The A.I.'s phrases and syntax are not mechanical at all.

"Converting ideas into text is where this magic happens.I can see this supporting peoplewhosefirst language might not be Englishby validating their ideas. They often know exactly what to say but are unsure how it might land. Having a variety of options will help them craft the perfect piece. --Tarik Sehovic, growth adviser, Copy.Ai

A natural progression

One thing is certain: Brands, marketers, and advertisers will love the increasing capabilities of A.I. and machine learning.

Should copywriters and editors fear the same? They should remember that readers are their customers and A.I. tools lead to better drafts. Audiences are short on time and therefore impatient with badly written text.

Writers and authors must adapt with the times or risk being relegated to a bygone era. A consumer-facing wordsmith should use the best tools for engaging audiences.

There's a reason why stone tablets, scrolls, pencils, and typewriters are obsolete. These don't add enough value in the Information Age. What does add value is efficient and effective messages that are consumed by target demographics.

Cognitive systems are radically changing what we think of as "content." And traditional forms are being eclipsed by smarter, interactive mediums.



Between Games and Apocalyptic Robots: Considering Near-Term Societal Risks of Reinforcement – Medium

With many of us stuck at home this past year, we've seen a surge in the popularity of video games. That trend hasn't been limited to humans. DeepMind and Google AI both released results from their Atari-playing AIs, which have taught themselves to play over fifty Atari games from scratch, with no provided rules or guidelines. The unique thing about these new results is how general the AI agent is. While previous efforts have achieved human performance on the games they were trained to play, DeepMind's new AI agent, MuZero, could teach itself to beat humans at Atari games it had never encountered in under a day. If this reminds you of AlphaZero, which taught itself to play Go and then chess well enough to outperform world champions, that's because it demonstrates an advance in the same suite of algorithms, a class of machine learning called Reinforcement Learning (RL).

While traditional machine learning parses out its model of the world (typically a small world pertaining only to the problem it's designed to solve) from swathes of data, RL is real-time observation based. This means RL learns its model primarily through trial-and-error interactions with its environment, not by pulling out correlations from data representing a historical snapshot of it. In the RL framework, each interaction with the environment is an opportunity to build towards an overarching goal, referred to as a reward. An RL agent is trained to make a sequence of decisions on how to interact with its environment that will ultimately maximize its reward (i.e. help it win the game).
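A minimal sketch of this trial-and-error loop is shown below. The names `env`, `policy`, and `learn` are placeholders for an environment with reset()/step() methods, an action-selection rule, and an update rule; they do not correspond to any specific library or to the systems discussed in this article.

```python
def run_episode(env, policy, learn):
    """One episode of the RL interaction loop: observe a state, act,
    receive a reward, and update the agent toward more reward."""
    state = env.reset()
    done, total_reward = False, 0.0
    while not done:
        action = policy(state)                       # choose an interaction
        next_state, reward, done = env.step(action)  # observe its consequence
        learn(state, action, reward, next_state)     # adapt the agent's model
        state = next_state
        total_reward += reward
    return total_reward
```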

This unique iterative learning paradigm allows the AI model to change and adapt to its environment, making RL an attractive solution for open-ended, real-world problem-solving. It also makes it a leading candidate for artificial general intelligence (AGI) and has some researchers concerned about the rise of truly autonomous AI that does not align with human values. Nick Bostrom first posed what is now the canonical example of this risk among AI safety researchers: a paperclip robot with one goal, to optimize the production efficiency of paperclips. With no other specifications, the agent quickly drifts from optimizing its own paperclip factory to commandeering food production supply chains for the paperclip-making cause. It proceeds to place paperclips above all other human needs until all that's left of the world is a barren wasteland covered end to end with unused paper clips. The takeaway? Extremely literal problem solving combined with inaccurate problem definition can lead to bad outcomes.

This rogue AGI (albeit in more high-stakes incarnations like weapons management) is the type of harm usually thought of when trying to make RL safe in the context of society. However, between an autonomous agent teaching itself games in the virtual world and an intelligent but misguided AI putting humanity at existential risk lie a multitude of sociotechnical concerns. As RL is being rolled out in domains ranging from social media to medicine and education, it's time we seriously think about these near-term risks.

How the paperclip problem will play out in the near term is likely to be rather subtle. For example, medical treatment protocols are currently popular candidates for RL modeling; they involve a series of decisions (which treatment options to try) with uncertain outcomes (different options work better for different people) that all connect to the eventual outcome (patient health). One such study tried to identify the best treatment decisions to avoid sepsis in ICU patients based on multitudes of data, including medical histories, clinical charts, and doctors' notes. Their first iteration was an astounding success. With very high accuracy, it identified treatment paths that resulted in patient death. However, upon further examination and consultation with clinicians, it turned out that though the agent had been allowed to learn from a plethora of potentially relevant treatment considerations, it had latched onto only one main indicator for death: whether or not a chaplain was called. The goal of the system was to flag treatment paths that led to deaths, and in a very literal sense that's what it did. Clinicians only called a chaplain when a patient presented as close to death.

You'll notice that in this example, the incredibly literal yet unhelpful solution the RL agent was taking was discovered by the researchers. This is no accident. The field of modern medicine is built around the reality that connections between treatments and outcomes typically have no known causal explanations. Aspirin, for example, was used as an anti-inflammatory for over seventy years before we had any insight into why it worked. This lack of causal understanding is sometimes referred to as "intellectual debt": if we can't describe why something works, we may not be able to predict when or how it will fail. Medicine has grown around this fundamental uncertainty. Through strict codes of ethics, industry standards, and regulatory infrastructure (i.e., clinical trials), the field has developed the scaffolding to minimize the accompanying harms. RL systems aiming to help with diagnosis and treatment have to develop within this infrastructure. Compliance with the machinery medicine has built around intellectual debt is more likely to result in slow and steady progress, without colossal misalignment. This same level of oversight does not apply to fields like social media, the potential harms of which are hard to pin down and which have virtually no regulatory scaffolding in place.

We may have already experienced some of the early harms of RL-based algorithms in complex domains. In 2018, YouTube engineers released a paper describing an RL addition to their recommendation algorithm that increased daily watch time by 6 million hours in the beta testing phase. Meanwhile, anecdotal accounts of radicalization through YouTube rabbit holes of increasingly conspiratorial content (e.g., NYTimes reporting on YouTube's role in empowering Brazil's far right) were on the rise. While it is impossible to know exactly which algorithms powered the platform's recommendations at the time, this rabbit-hole effect would be a natural result of an RL algorithm trying to maximize view time by nudging users toward increasingly addictive content.

In the near future, dynamic manipulation of this sort may end up at odds with established protections under the law. For example, Facebook has recently been put under scrutiny by the Department of Housing and Urban Development for discriminatory housing advertisements. The HUD suit alleges that even without explicit targeting filters that amount to the exclusion of protected groups, its algorithms are likely to hide ads from users who the system determines are unlikely to engage with the ad, even if the advertiser explicitly wants to reach those users. Given the types of (non-RL) ML algorithms FB currently uses in advertising, proving this disparate impact would be a matter of examining the data and features used to train the algorithm. While the current lack of transparency makes this challenging, it is fundamentally possible to roll out benchmarks capable of flagging such discrimination.

If advertising were instead powered by RL, benchmarks would not be enough. An RL advertising algorithm tasked with ensuring it does not discriminate against protected classes could easily end up making it look as though it were not discriminating instead. If the RL agent were optimized for profit and the practice of discrimination was profitable, the RL agent would be incentivized to find loopholes under which it could circumvent protections. Just as in the sepsis treatment case, the system is likely to find a shortcut toward reaching its objective, only in this case the lack of regulatory scaffolding makes it unlikely this failure will be picked up. The propensity of RL to adapt to meet metrics, while skirting over intent, will make it challenging to tag such undesirable behavior. This situation is further complicated by our heavy reliance on data as a means to flag potential bias in ML systems.

Unlike RL, traditional machine learning is innately static; it takes in loads of data, parses it for correlations, and outputs a model. Once a system has been trained, updating it to accommodate a new environment or changes to the status quo requires repeating most or all of that initial training with updated data. Even for firms that have the computing power to make such retraining seamless, the reliance on data has allowed an "in" for transparency. The saying goes, "machine learning is like money laundering for bias." If an ML system is trained using biased or unrepresentative data, its model of the world will reflect that. In traditional machine learning, we can at least follow the marked bills and point out when an ML system is going to be prone to discrimination by examining its training data. We may even be able to preprocess the data before training the system in an attempt to preemptively correct for bias.

Since RL is generally real-time observation-based rather than training data-based, this follow-the-data approach to algorithmic oversight does not apply. There is no controlled input data to help us anticipate or correct for where an RL system can go wrong before we set it loose in the world.

In certain domains, this lack of data-borne insight may not be too problematic. The more we can specify what the moving parts of a given application are and the ways in which they may fail, be it through an understanding of the domain or through regulatory scaffolding, the safer it is for us to use RL. DeepMind's use of RL to lower the energy costs of its computing centers, a process ultimately governed by the laws of physics, deserves less scrutiny than the RL-based K-12 curriculum generator Google's Ed Chi views as a near-term goal of the field. The harder it is to describe what success looks like within a given domain, the more prone to bad outcomes it is. This is true of all ML systems, but it is even more crucial for RL systems that cannot be meaningfully validated ahead of use. As regulators, we need to think about which domains need more regulatory scaffolding to minimize the fallout from our intellectual debt, while allowing for the immense promise of algorithms that can learn from their mistakes.
