Archive for the ‘AlphaGo’ Category

PNYA Post Break Will Explore the Relationship Between Editors and Assistants – Creative Planet Network

In honor of Women's History Month, Post Break, the Post New York Alliance (PNYA)'s free webinar series, will examine the way two top female editors have worked with their assistants to deliver shows for HBO, Freeform and others.

By ArtisansPR Published: March 23, 2021

Free video conference slated for Thursday, March 25th at 4:00 p.m. EDT

NEW YORK CITY -- A strong working relationship between the editor and her assistants is crucial to successfully completing films and television shows. In honor of Women's History Month, Post Break, the Post New York Alliance (PNYA)'s free webinar series, will examine the way two top female editors have worked with their assistants to deliver shows for HBO, Freeform and others.

Agnès Challe-Grandits, editor of the upcoming Freeform series Single Drunk Female, and her assistant, Tracy Nayer, will join Shelby Siegel, Emmy and ACE award winner for the HBO series The Jinx: The Life and Deaths of Robert Durst, and her assistant, JiYe Kim, to discuss collaboration, how they organize their projects and how editors and assistants support one another. The discussion will be moderated by Post Producer Claire Shanley.

The session is scheduled for Thursday, March 25th, at 4:00 p.m. EDT. Following the webinar, attendees will have an opportunity to join small, virtual breakout groups for discussion and networking.

Panelists

Agnès Grandits has decades of experience as a film and television editor. Her current project is Single Drunk Female, a new half-hour comedy for Freeform. Her previous television credits include P-Valley and Sweetbitter for STARZ, Divorce for HBO, Odd Mom Out for Bravo and The Breaks for VH1. She also worked for Showtime on The Affair and Nurse Jackie. In addition, she edited The Jim Gaffigan Show for TV Land, Gracepoint for Fox, an episode in the final season of Bored to Death for HBO, and 100 Centre Street, directed by Sidney Lumet, for A&E. Her credits with HBO also include Sex and the City and The Wire.

Tracy Nayer has been an Assistant Editor for more than ten years and has been assisting Agnès Grandits for five. She began her career in editorial finishing at a large post-production studio.

Shelby Siegel is an Emmy award-winning film and television editor who has worked in New York for more than 20 years. Her credits include Andrew Jarecki's Capturing the Friedmans and All Good Things, Jonathan Caouette's Tarnation, and Gary Hustwit's Helvetica and Urbanized. She won Emmy and ACE awards for HBO's acclaimed six-part series The Jinx: The Life and Deaths of Robert Durst. Most recently, she edited episodes of Quantico (ABC), High Maintenance (HBO) and The Deuce (HBO). She began her career working under some of the industry's top directors, including Paul Haggis (In the Valley of Elah), Mike Nichols (Charlie Wilson's War), and Ang Lee on his Oscar-winning films Crouching Tiger, Hidden Dragon and Brokeback Mountain. She also worked on the critically acclaimed series The Wire.

JiYe Kim began her career in experimental films, working with Anita Thacher and Barbara Hammer. Her first credit as an assistant editor came on AlphaGo (2017). Her most recent credits include High Maintenance, The Deuce, Her Smell and Share.

Moderator

Claire Shanley is a Post Producer whose recent projects include The Plot Against America and The Deuce. Her background also includes post facility and technical management roles. She served as Managing Director at Sixteen19 and Technical Director at Broadway Video. She co-chairs the Board of Directors of the NYC LGBT Center and serves on the Advisory Board of NYWIFT (NY Women in Film & Television).

When: Thursday, March 25, 2021, 4:00 p.m. EDT

Title: The E&A Team

REGISTER HERE

Sound recordings of past Post Break sessions are available here: https://www.postnewyork.org/page/PNYAPodcasts

Past Post Break sessions in video blog format are available here: https://www.postnewyork.org/blogpost/1859636/Post-Break

About Post New York Alliance (PNYA)

The Post New York Alliance (PNYA) is an association of film and television post-production facilities, labor unions and post professionals operating in New York State. The PNYA's objective is to create jobs by: 1) extending and improving the New York State Tax Incentive Program; 2) advancing the services that the New York post-production industry provides; and 3) creating avenues for a diverse talent pool to enter the industry.

http://www.pnya.org

Read more:
PNYA Post Break Will Explore the Relationship Between Editors and Assistants - Creative Planet Network

You need to Know about the History of Artificial Intelligence – Technotification

The concept of a machine that thinks dates back to ancient Greece. Since the introduction of electronic computing, however, a number of events and milestones have marked the development of artificial intelligence, several of which relate to subjects covered in this article:

1950: Computing Machinery and Intelligence is published by Alan Turing. In the paper, Turing, known for cracking the ENIGMA code used by Nazi Germany during the Second World War, proposes to answer the question "Can machines think?" and introduces the Turing Test, which would determine whether a computer can demonstrate the same intelligence as a human. The value of the Turing Test has been debated ever since.

1956: John McCarthy coins the term "artificial intelligence" at the first-ever AI conference, held at Dartmouth College. (McCarthy also went on to invent the Lisp language.) Later that year, Allen Newell, J.C. Shaw and Herbert Simon create the Logic Theorist, the first working AI program.

1967: Frank Rosenblatt builds the Mark 1 Perceptron, the first computer based on a neural network, which learned through trial and error. Just a year later, Marvin Minsky and Seymour Papert publish a book titled Perceptrons, which becomes both a landmark work on neural networks and, at least for a while, an argument against future neural network research projects.

1980s: Neural networks trained with backpropagation become widely used in AI applications.

1997: IBM's Deep Blue defeats then world chess champion Garry Kasparov in a chess match (and rematch).

2011: IBM Watson beats champions Ken Jennings and Brad Rutter at Jeopardy!

2015: Baidu's Minwa supercomputer uses a special kind of deep neural network, called a convolutional neural network, to identify and categorize images with a higher accuracy rate than the average human.

2016: The AlphaGo program from DeepMind, guided by a deep neural network, beats Lee Sedol, the world champion Go player, in a five-game match. The victory is significant given the huge number of possible positions as the game progresses (more than 14.5 trillion after only four moves!). Google, notably, had purchased DeepMind for a reported $400 million.

Artificial intelligence allows computers and devices to imitate the human mind's capacity to perceive, understand, solve problems and make decisions.

In computer science, the term artificial intelligence refers to human-like intelligence demonstrated by a computer, robot or other machine. In common usage, it refers to a computer's or machine's ability to imitate the capacities of the human mind: learning from examples and experience, recognizing objects, understanding and responding to language, making decisions and solving problems, and combining these and other capabilities to perform functions a human might perform, such as greeting a hotel guest.

After decades of being confined to science fiction, AI is now part of our daily lives. The explosion of data, together with the wide availability of computing power, has made the current surge in AI development feasible, allowing all of this data to be processed faster and more reliably than humans can manage. AI finishes our words as we type, gives directions when asked, vacuums our floors, and suggests what we should buy or watch next. It also helps skilled practitioners work faster and with better results, for example by driving software for medical image analysis.

As popular as artificial intelligence is today, its terminology can be difficult to grasp, since certain terms are used interchangeably and, in some cases, as synonyms. What is the difference between artificial intelligence and machine learning? Between machine learning and deep learning? Between speech recognition and natural language generation? Between weak AI and strong AI? This article will help you sort out these and other terms and grasp the basics of how AI works.

Continue reading here:
You need to Know about the History of Artificial Intelligence - Technotification

Google’s AlphaGo computer beats human champ Lee Sedol in …

SEOUL, South Korea -- Game not over? Human Go champion Lee Sedol says Google's Go-playing program AlphaGo is not yet superior to humans, despite its 4:1 victory in a match that ended Tuesday.

The week-long showdown between the South Korean Go grandmaster and Google DeepMind's artificial intelligence program showed the computer software has mastered a major challenge for artificial intelligence.

"I don't necessarily think AlphaGo is superior to me. I believe that there is still more a human being could do to play against artificial intelligence," Lee said after the nearly five-hour-long final game.

AlphaGo had the upper hand in terms of its lack of vulnerability to emotion and fatigue, two crucial aspects in the intense brain game.

"When it comes to psychological factors and strong concentration power, humans cannot be a match," Lee said.

But he added, "I don't think my defeat this time is a loss for humanity. It clearly shows my weaknesses, but not the weakness of all humanity."

He expressed deep regret for the loss and thanked his fans for their support, saying he enjoyed all five matches.

Lee, 33, has made his living playing Go since he was 12 and is famous in South Korea even among people who do not play the game. The entire country was rooting for him to win.

The series was one of the most intensely watched events in the past week across Asia. The human-versus-machine battle hogged headlines, eclipsing reports of North Korean threats of a pre-emptive strike on the South.

The final game was too close to call until the very end. Experts said it was the best of the five games in that Lee was in top form and AlphaGo made few mistakes. Lee resigned about five hours into the game.

The final match was broadcast live on three major TV networks in South Korea and on big TV screens in downtown Seoul.

Google estimated that 60 million people in China, where Go is a popular pastime, watched the first match on Wednesday.

Before AlphaGo's victory, the ancient Chinese board game was seen as too complex for computers to master. Go fans across Asia were astonished when Lee, one of the world's best Go players, lost the first three matches.

Lee's win over AlphaGo in the fourth match, on Sunday, showed the machine was not infallible: Afterward, Lee said AlphaGo's handling of surprise moves was weak. The program also played less well with a black stone, which plays first and has to claim a larger territory than its opponent to win.

Choosing not to exploit that weakness, Lee opted for a black stone in the last match.

Go players take turns placing the black and white stones on 361 grid intersections on a nearly square board. Stones can be captured when they are surrounded by those of their opponent.

To take control of territory, players surround vacant areas with their stones. The game continues until both sides agree there are no more places to put stones, or until one side decides to quit.
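The capture mechanic described above is compact enough to sketch in code. What follows is a minimal, illustrative Python sketch (the board representation and function names are assumptions for illustration, not any engine's actual code): a connected group of stones is captured when it has no adjacent vacant points, called liberties, left.

```python
# Minimal sketch of the Go capture rule. Assumption: the board is a
# 19x19 list of lists holding "." (empty), "B" (black) or "W" (white).
def group_and_liberties(board, row, col):
    """Flood-fill the group of stones at (row, col); collect its liberties."""
    color = board[row][col]
    group, liberties, stack = {(row, col)}, set(), [(row, col)]
    while stack:
        r, c = stack.pop()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < 19 and 0 <= nc < 19:
                if board[nr][nc] == ".":
                    liberties.add((nr, nc))        # an adjacent vacant point
                elif board[nr][nc] == color and (nr, nc) not in group:
                    group.add((nr, nc))
                    stack.append((nr, nc))
    return group, liberties

def remove_if_captured(board, row, col):
    """Remove the group at (row, col) if the opponent has fully surrounded it."""
    group, liberties = group_and_liberties(board, row, col)
    if not liberties:                              # no empty neighbors: captured
        for r, c in group:
            board[r][c] = "."
```

A real engine layers rules such as ko and suicide on top of this, but the core idea, that a surrounded group comes off the board, is just a flood fill.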

Google officials say the company wants to apply technologies used in AlphaGo in other areas, such as smartphone assistants, and ultimately to help scientists solve real-world problems.

As for Go, other top players are bracing themselves.

Chinese world Go champion Ke Jie said it was just a matter of time before top Go players like himself would be overtaken by artificial intelligence.

"It is very hard for Go players at my level to improve even a little bit, whereas AlphaGo has hundreds of computers to help it improve and can play hundreds of practice matches a day," Ke said.

"It does not seem like a good thing for we professional Go players, but the match played a very good role in promoting Go," Ke said.

Go here to see the original:
Google's AlphaGo computer beats human champ Lee Sedol in ...

The Pastry A.I. That Learned to Fight Cancer – The New Yorker

One morning in the spring of 2019, I entered a pastry shop in the Ueno train station, in Tokyo. The shop worked cafeteria-style. After taking a tray and tongs at the front, you browsed, plucking what you liked from heaps of baked goods. What first struck me was the selection, which seemed endless: there were croissants, turnovers, Danishes, pies, cakes, and open-faced sandwiches piled up everywhere, sometimes in dozens of varieties. But I was most surprised when I got to the register. At the urging of an attendant, I slid my items onto a glowing rectangle on the counter. A nearby screen displayed an image, shot from above, of my doughnuts and Danish. I watched as a set of jagged, neon-green squiggles appeared around each item, accompanied by its name in Japanese and a price. The system had apparently recognized my pastries by sight. It calculated what I owed, and I paid.

I tried to gather myself while the attendant wrapped and bagged my items. I was still stunned when I got outside. The bakery system had the flavor of magic: a feat seemingly beyond the possible, made to look inevitable. I had often imagined that, someday, I'd be able to point my smartphone camera at a peculiar flower and have it identified, or at a chess board, to study the position. Eventually, the tech would get to the point where one could do such things routinely. Now it appeared that we were in this world already, and that the frontier was pastry.

Computers learned to see only recently. For decades, image recognition was one of the grand challenges in artificial intelligence. As I write this, I can look up at my shelves: they contain books, and a skein of yarn, and a tangled cable, all inside a cabinet whose glass enclosure is reflecting leaves in the trees outside my window. I can't help but parse this scene: about a third of the neurons in my cerebral cortex are implicated in processing visual information. But, to a computer, it's a mess of color and brightness and shadow. A computer has never untangled a cable, doesn't get that glass is reflective, doesn't know that trees sway in the wind. A.I. researchers used to think that, without some kind of model of how the world worked and all that was in it, a computer might never be able to distinguish the parts of complex scenes. The field of computer vision was a zoo of algorithms that made do in the meantime. The prospect of seeing like a human was a distant dream.

All this changed in 2012, when Alex Krizhevsky, a graduate student in computer science, released AlexNet, a program that approached image recognition using a technique called deep learning. AlexNet was a neural network, "deep" because its simulated neurons were arranged in many layers. As the network was shown new images, it guessed what was in them; inevitably, it was wrong, but after each guess it was made to adjust the connections between its layers of neurons, until it learned to output a label matching the one that researchers provided. (Eventually, the interior layers of such networks can come to resemble the human visual cortex: early layers detect simple features, like edges, while later layers perform more complex tasks, such as picking out shapes.) Deep learning had been around for years, but was thought impractical. AlexNet showed that the technique could be used to solve real-world problems, while still running quickly on cheap computers. Today, virtually every A.I. system you've heard of (Siri, AlphaGo, Google Translate) depends on the technique.
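The guess-and-adjust loop the paragraph describes is the heart of supervised deep learning. Here is a minimal sketch in Python using PyTorch, a modern library rather than Krizhevsky's original code; the tiny network and the ten hypothetical labels are stand-ins for illustration, not AlexNet's actual architecture.

```python
import torch
import torch.nn as nn

# A tiny stand-in network; AlexNet itself had many more layers.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 10),                # scores for 10 hypothetical labels
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def training_step(images, labels):
    """One guess-and-adjust step: predict, measure error, update weights.

    images: float tensor of shape (batch, 3, height, width)
    labels: long tensor of shape (batch,) with the researcher-provided labels
    """
    optimizer.zero_grad()
    guesses = model(images)           # the network guesses what is in the images
    loss = loss_fn(guesses, labels)   # how wrong the guesses were
    loss.backward()                   # backpropagate: trace blame through the layers
    optimizer.step()                  # adjust the connections between neurons
    return loss.item()
```

Repeated over millions of labelled images, these small adjustments are what let the interior layers settle into the edge and shape detectors described above.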

The drawback of deep learning is that it requires large amounts of specialized data. A deep-learning system for recognizing faces might have to be trained on tens of thousands of portraits, and it won't recognize a dress unless it's also been shown thousands of dresses. Deep-learning researchers, therefore, have learned to collect and label data on an industrial scale. In recent years, we've all joined in the effort: today's facial recognition is particularly good because people tag themselves in pictures that they upload to social networks. Google asks users to label objects that its A.I.s are still learning to identify: that's what you're doing when you take those "Are you a bot?" tests, in which you select all the squares containing bridges, crosswalks, or streetlights. Even so, there are blind spots. Self-driving cars have been known to struggle with unusual signage, such as the blue stop signs found in Hawaii, or signs obscured by dirt or trees. In 2017, a group of computer scientists at the University of California, Berkeley, pointed out that, on the Internet, almost all the images tagged as bedrooms are clearly staged and depict a made bed from 2-3 meters away. As a result, networks have trouble recognizing real bedrooms.

It's possible to fill in these blind spots through focussed effort. A few years ago, I interviewed for a job at a company that was using deep learning to read X-rays, starting with bone fractures. The programmers asked surgeons and radiologists from some of the best hospitals in the U.S. to label a library of images. (The job I interviewed for wouldn't have involved the deep-learning system; instead, I'd help improve the Microsoft Paint-like program that the doctors used for labelling.) In Tokyo, outside the bakery, I wondered whether the pastry recognizer could possibly be relying on a similar effort. But it was hard to imagine a team of bakers assiduously photographing and labelling each batch as it came out of the oven, tens of thousands of times, for all the varieties on offer. My partner suggested that the bakery might be working with templates, such that every pain au chocolat would have precisely the same shape. An alternative, suggested by the machine's retro graphics (but perplexing, given the system's uncanny performance), was that it wasn't using deep learning. Maybe someone had gone down the old road of computer vision. Maybe, by really considering what pastry looked like, they had taught their software to see it.

Hisashi Kambe, the man behind the pastry A.I., grew up in Nishiwaki City, a small town that sits at Japan's geographic center. The city calls itself Japan's navel; surrounded by mountains and rice fields, it's best known for airy, yarn-dyed cotton fabrics woven in intricate patterns, which have been made there since the eighteenth century. As a teen-ager, Kambe planned to take over his father's lumber business, which supplied wood to homes built in the traditional style. But he went to college in Tokyo and, after graduating, in 1974, took a job in Osaka at Matsushita Electric Works, which later became Panasonic. There, he managed the company's relationship with I.B.M. Finding himself in over his head, he took computer classes at night and fell in love with the machines.

In his late twenties, Kambe came home to Nishiwaki, splitting his time between the lumber mill and a local job-training center, where he taught computer classes. Interest in computers was soaring, and he spent more and more time at the school; meanwhile, more houses in the area were being built in a Western style, and traditional carpentry was in decline. Kambe decided to forgo the family business. Instead, in 1982, he started a small software company. In taking on projects, he followed his own curiosity. In 1983, he began working with NHK, one of Japan's largest broadcasters. Kambe, his wife, and two other programmers developed a graphics system for displaying the score during baseball games and exchange rates on the nightly news. In 1984, Kambe took on a problem of special significance in Nishiwaki. Textiles were often woven on looms controlled by planning programs; the programs, written on printed cards, looked like sheet music. A small mistake on a planning card could produce fabric with a wildly incorrect pattern. So Kambe developed SUPER TEX-SIM, a program that allowed textile manufacturers to simulate the design process, with interactive yarn and color editors. It sold poorly until 1985, when a series of breaks led to a distribution deal with Mitsubishi's fabric division. Kambe formally incorporated as BRAIN Co., Ltd.

For twenty years, BRAIN took on projects that revolved, in various ways, around seeing. The company made a system for rendering kanji characters on personal computers, a tool that helped engineers design bridges, systems for onscreen graphics, and more textile simulators. Then, in 2007, BRAIN was approached by a restaurant chain that had decided to spin off a line of bakeries. Bread had always been an import in Japan (the Japanese word for it, pan, comes from Portuguese), and the country's rich history of trade had left consumers with ecumenical tastes. Unlike French boulangeries, which might stake their reputations on a handful of staples, Japanese bakeries emphasized range. (In Japan, even Kit Kats come in more than three hundred flavors, including yogurt sake and cheesecake.) New kinds of baked goods were being invented all the time: the carbonara, for instance, takes the Italian pasta dish and turns it into a kind of breakfast sandwich, with a piece of bacon, slathered in egg, cheese, and pepper, baked open-faced atop a roll; the ham corn pulls a similar trick, but uses a mixture of corn and mayo for its topping. Every kind of baked good was an opportunity for innovation.

Analysts at the new bakery venture conducted market research. They found that a bakery sold more the more varieties it offered; a bakery offering a hundred items sold almost twice as much as one selling thirty. They also discovered that naked pastries, sitting in open baskets, sold three times as well as pastries that were individually wrapped, because they appeared fresher. These two facts conspired to create a crisis: with hundreds of pastry types, but no wrappers (and, therefore, no bar codes), new cashiers had to spend months memorizing what each variety looked like, and its price. The checkout process was difficult and error-prone (the cashier would fumble at the register, handling each item individually) and also unsanitary and slow. Lines in pastry shops grew longer and longer. The restaurant chain turned to BRAIN for help. Could they automate the checkout process?

AlexNet was five years in the future; even if Kambe and his team could have photographed thousands of pastries, they couldn't have pulled a neural network off the shelf. Instead, the state of the art in computer vision involved piecing together a pipeline of algorithms, each charged with a specific task. Suppose that you wanted to build a pedestrian-recognition system. You'd start with an algorithm that massaged the brightness and colors in your image, so that you weren't stymied by someone's red shirt. Next, you might add algorithms that identified regions of interest, perhaps by noticing the zebra pattern of a crosswalk. Only then could you begin analyzing image features: patterns of gradients and contrasts that could help you pick out the distinctive curve of someone's shoulders, or the "A" made by a torso and legs. At each stage, you could choose from dozens if not hundreds of algorithms, and ways of combining them.
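For the pedestrian example, a pre-deep-learning pipeline of roughly this shape can still be assembled from OpenCV's classical building blocks. The sketch below is illustrative, not BRAIN's method: the input filename is hypothetical, histogram equalization stands in for the brightness-massaging stage, and HOG features (histograms of oriented gradients) stand in for the gradient-and-contrast stage.

```python
import cv2

image = cv2.imread("street.jpg")        # hypothetical input image

# Stage 1: normalize brightness so a red shirt or a harsh shadow
# doesn't stymie the later stages.
gray = cv2.equalizeHist(cv2.cvtColor(image, cv2.COLOR_BGR2GRAY))

# Stages 2-3: HOG features (patterns of gradients and contrasts) fed to
# a linear classifier trained to pick out the human silhouette.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())
boxes, weights = hog.detectMultiScale(gray, winStride=(8, 8))

for x, y, w, h in boxes:                # draw a box around each detection
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)
```

Every stage here is a hand-chosen algorithm with tunable knobs, which is what made such pipelines brittle and their assembly such hard-won work.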

For the BRAIN team, progress was hard-won. They started by trying to get the cleanest picture possible. A document outlining the company's early R. & D. efforts contains a triptych of pastries: a carbonara sandwich, a ham corn, and a minced potato. This trio of lookalikes was one of the system's early nemeses. "As you see," the text below the photograph reads, "the bread is basically brown and round." The engineers confronted two categories of problem. The first they called "similarity among different kinds": a bacon pain d'épi, for instance (a sort of braided baguette with bacon inside), has a complicated knotted structure that makes it easy to mistake for sweet-potato bread. The second was "difference among same kinds": even a croissant came in many shapes and sizes, depending on how you baked it; a cream doughnut didn't look the same once its powdered sugar had melted.

In 2008, the financial crisis dried up BRAIN's other business. Kambe was alarmed to realize that he had bet his company, which was having to make layoffs, on the pastry project. The situation lent the team a kind of maniacal focus. The company developed ten BakeryScan prototypes in two years, with new image preprocessors and classifiers. They tried out different cameras and light bulbs. By combining and rewriting numberless algorithms, they managed to build a system with ninety-eight per cent accuracy across fifty varieties of bread. (At the office, they were nothing if not well fed.) But this was all under carefully controlled conditions. In a real bakery, the lighting changes constantly, and BRAIN's software had to work no matter the season or the time of day. Items would often be placed on the device haphazardly: two pastries that touched looked like one big pastry. A subsystem was developed to handle this scenario. Another subsystem, called Magnet, was made to address the opposite problem of a pastry that had been accidentally ripped apart.

Read more from the original source:
The Pastry A.I. That Learned to Fight Cancer - The New Yorker

Will Artificial Intel get along with us? Only if we design it that way | TheHill – The Hill

Artificial Intelligence (AI) systems that interact with us the way we interact with each other have long typified Hollywood's image of AI, whether you think of HAL in 2001: A Space Odyssey, Samantha in Her, or Ava in Ex Machina. It thus might surprise people that making systems that interact, assist or collaborate with humans has never been high on the technical agenda.

From its beginning, AI has had a rather ambivalent relationship with humans. The biggest AI successes have come either at a distance from humans (think of the Spirit and Opportunity rovers navigating the Martian landscape) or in cold adversarial faceoffs (Deep Blue defeating world chess champion Garry Kasparov, or AlphaGo besting Lee Sedol). In contrast to the magnetic pull of these "replace/defeat humans" ventures, the goal of designing AI systems that are human-aware, capable of interacting and collaborating with humans and engendering trust in them, has received much less attention.

More recently, as AI technologies started capturing our imaginations, there has been a conspicuous change, with "human" becoming the desirable adjective for AI systems. There are so many variations (human-centered, human-compatible, human-aware AI, etc.) that there is almost a need for a dictionary of terms. Some of this interest arose naturally from a desire to understand and regulate the impacts of AI technologies on people. In previous columns, I've looked, for example, at bias in AI systems and at the impact of AI-generated synthetic reality, such as deep fakes or "mind twins."

This time, let us focus on the challenges and impacts of AI systems that continually interact with humans as decision support systems, personal assistants, intelligent tutoring systems, robot helpers, social robots, AI conversational companions, etc.

To be aware of humans, and to interact with them fluently, an AI agent needs to exhibit social intelligence. Designing agents with social intelligence received little attention when AI development was focused on autonomy rather than coexistence. Its importance for humans cannot be overstated, however. After all, evolutionary theory shows that we developed our impressive brains not so much to run away from lions on the savanna but to get along with each other.

A cornerstone of social intelligence is the so-called theory of mind: the ability to model the mental states of the humans we interact with. Developmental psychologists have shown (with compelling experiments like the Sally-Anne test) that children, with the possible exception of those on the autism spectrum, develop this ability quite early.

Successful AI agents need to acquire, maintain and use such mental models to modulate their own actions. At a minimum, AI agents need approximations of humans' task and goal models, as well as the humans' model of the AI agent's own task and goal models. The former will guide the agent to anticipate and manage the needs, desires and attention of humans in the loop (think of the prescient abilities of the character Radar on the TV series M*A*S*H), and the latter will allow it to act in ways that are interpretable to humans, by conforming to their mental models of it, and to be ready to provide customized explanations when needed.

With the increasing use of AI-based decision support systems in many high-stakes areas, including health and criminal justice, the need for AI systems that exhibit interpretable or explainable behavior to humans has become quite critical. The European Union's General Data Protection Regulation posits a right to contestable explanations for all machine decisions that affect humans (e.g., automated approval or denial of loan applications). While the simplest form of such explanations could well be a trace of the reasoning steps that led to the decision, things get complex quickly once we recognize that an explanation is not a soliloquy and that the comprehensibility of an explanation depends crucially on the mental states of the receiver. After all, your physician gives one kind of explanation for her diagnosis to you and another, perhaps more technical one, to her colleagues.
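To make the "trace of the reasoning steps" concrete, here is a minimal, hypothetical Python sketch of a rule-based loan decision that records each step it takes. The rules, thresholds and function name are invented for illustration; they are not drawn from any real lender or from the GDPR itself.

```python
# Hypothetical rule-based loan decision that records its reasoning steps,
# so the applicant can see (and contest) exactly which rule fired.
def decide_loan(income, debt, years_employed):
    trace = []
    if income < 30_000:
        trace.append(f"income {income} below 30,000: high risk")
        return "deny", trace
    trace.append(f"income {income} at or above 30,000: passes income check")
    if debt / income > 0.4:
        trace.append(f"debt-to-income {debt / income:.2f} above 0.40: high risk")
        return "deny", trace
    trace.append(f"debt-to-income {debt / income:.2f} at or below 0.40: passes")
    if years_employed < 2:
        trace.append("employment under 2 years: refer to manual review")
        return "review", trace
    trace.append("employment history sufficient")
    return "approve", trace

decision, trace = decide_loan(income=45_000, debt=9_000, years_employed=3)
```

A trace like this is contestable in the simplest sense, since an applicant can point at the exact step they dispute; but, as noted above, it is a soliloquy: it makes no adjustment for the mental state of whoever is reading it.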

Provision of explanations thus requires a shared vocabulary between AI systems and humans, and the ability to customize the explanation to the mental models of humans. This task becomes particularly challenging since many modern data-based decision-making systems develop their own internal representations that may not be directly translatable to human vocabulary. Some emerging methods for facilitating comprehensible explanations include explicitly having the machine learn to translate explanations based on its internal representations to an agreed-upon vocabulary.

AI systems interacting with humans will need to understand and leverage insights from human factors and psychology. Not doing so could lead to egregious miscalculations. Initial versions of Tesla's Autopilot self-driving assistant, for example, seemed to have been designed with the unrealistic expectation that human drivers can come back to full alertness and manually override when the self-driving system runs into unforeseen modes, leading to catastrophic failures. Similarly, the systems will need to provide an appropriate emotional response when interacting with humans (even though there is no evidence, as yet, that emotions improve an AI agent's solitary performance). Multiple studies show that people do better at a task when computer interfaces show appropriate affect. Some have even hypothesized that part of the reason for the failure of Clippy, the old Microsoft Office assistant, was its permanent smug smile when it appeared to help flustered users.

AI systems with social intelligence capabilities also produce their own set of ethical quandaries. After all, trust can be weaponized in far more insidious ways than a rampaging robot. The potential for manipulation is further amplified by our own very human tendency to anthropomorphize anything that shows even remotely human-like behavior. Joe Weizenbaum had to shut down Eliza, history's first chatbot, when he found his staff pouring their hearts out to it; and scholars like Sherry Turkle continue to worry about the artificial intimacy such artifacts might engender. The ability to manipulate mental models can also allow AI agents to engage in lying or deception with humans, leading to a form of "head fakes" that will make today's deep fakes tame by comparison. While a certain level of white lying is seen as the glue of the human social fabric, it is not clear whether we want AI agents to engage in it.

As AI systems increasingly become human-aware, even quotidian tools surrounding us will start gaining mental-modeling capabilities. This adaptivity can be both a boon and a bane. While we talked about the harms of our tendency to anthropomorphize AI artifacts that are not human-aware, equally insidious are the harms that can arise when we fail to recognize that what we see as a simple tool is actually mental-modeling us. Indeed, micro-targeting by social media can be understood as a weaponized version of such manipulation; people would be much more guarded with social media platforms if they realized that those platforms are actively profiling them.

Given the potential for misuse, we should aim to design AI systems that must understand human values, mental models and emotions, and yet not exploit them with intent to cause harm. In other words, they must be designed with an overarching goal of beneficence to us.

All this requires a meaningful collaboration between AI and the humanities, including sociology, anthropology and behavioral psychology. Such interdisciplinary collaborations were the norm rather than the exception at the beginning of the AI field, and they are coming back into vogue.

Formidable as this endeavor might be, it is worth pursuing. We should be proactively building a future where AI agents work along with us, rather than passively fretting about a dystopian one where they are indifferent or adversarial. By designing AI agents to be human-aware from the ground up, we can increase the chances of a future where such agents both collaborate and get along with us.

Subbarao Kambhampati, PhD, is a professor of computer science at Arizona State University and the Chief AI Officer for AI Foundation, which develops realistic AI companions with social skills. He was the president of the Association for the Advancement of Artificial Intelligence, a founding board member of Partnership on AI, and is an Innovators Network Foundation Privacy Fellow. He can be followed on Twitter @rao2z.

Read more here:
Will Artificial Intel get along with us? Only if we design it that way | TheHill - The Hill