Archive for the ‘Artificial Intelligence’ Category

Why Physics has Relevancy To Artificial Intelligence And Building AI Leadership Brain Trust? – Forbes

This blog is a continuation of the Building AI Leadership Brain Trust blog series, which targets board directors and CEOs to accelerate their duty of care: developing stronger skills and competencies in AI so that their AI programs achieve sustained results.

My last two blogs introduced the value of science and stressed its importance to AI. The first focused on the importance of AI professionals having some foundation in computing science as a cornerstone for designing and developing AI models and production processes; the second explored the richness of complexity science, arguing that integrating diverse disciplines into complex AI programs is key to a successful return on investment (ROI).

This blog introduces the importance of physics and explores its relationship to AI. I often see AI solutioning teams missing physics skills in their solution constructs, which I believe is a strategic mistake for many complex AI programs. It is important for C-levels to understand that AI is not a singular discipline; it requires many other skills to get the solution architecture right. So understand deeply the business problem in front of you: the more complex the problem, the more value physicists will bring in guiding you forward.


In the Brain Trust series, I have identified over 50 skills required to help evolve talent in organizations committed to advancing AI literacy. The last few blogs have discussed the relevance of the technical skills. To see the full AI Brain Trust Framework introduced in the first blog, reference here.

We are currently focused on the technical skills in the AI Brain Trust Framework.

Technical Skills:

1. Research Methods Literacy

2. Agile Methods Literacy

3. User-Centered Design Literacy

4. Data Analytics Literacy

5. Digital Literacy (Cloud, SaaS, Computers, etc.)

6. Mathematics Literacy

7. Statistics Literacy

8. Sciences (Computing Science, Complexity Science, Physics) Literacy

9. Artificial Intelligence (AI) and Machine Learning (ML) Literacy

10. Sustainability Literacy

What is the relevance of physics to AI as a discipline?

There are many aspects of physics that can be applied to AI, so it does not take one long to appreciate the value of this science discipline. One of the most significant discoveries in physics was the Higgs boson, often referred to as the God particle; machine-learning methods, including neural networks, helped physicists identify the complex patterns in particle collisions that confirmed its existence.

The last blog stressed the importance of complexity science; arguably the most important aspect of physics is that the discipline teaches you how to understand and decompose complex processes.

In prior blogs, I stressed that building an AI model requires three main enablements: 1) collecting and analyzing the data, 2) developing the AI model, and 3) evaluating the model outcomes and determining value. Each of these areas has relevance to physics, and a strong AI expert will appreciate the value that physics know-how can bring in enabling engineering teams to tackle the most complex problems in the world.

Let's start with data analysis. There are many forms of machine learning, but the approach with the closest linkage to physics is the neural network, which is trained to identify complex patterns as well as to find new ones. Classifying thousands of telescope images to identify black holes, by detecting subtle changes in the light around objects, is one example of the two disciplines coming together.
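To make the classify-and-evaluate workflow concrete, here is a minimal sketch using entirely synthetic stand-in data; the feature shapes and labels are hypothetical and are not drawn from any real astronomy pipeline.

```python
# Illustrative sketch only: a binary "lensing candidate vs. background"
# classifier trained on pre-extracted image features. The data is synthetic;
# a real pipeline would use labelled telescope images.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 64))                 # 1,000 images, 64 features each (hypothetical)
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # stand-in labels

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

The pattern of training on one split and scoring on a held-out split is the evaluation step described in the three enablements above.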

Physics professionals use terms like gravitational lensing for this kind of image analysis, in which neural networks tease out classifications at finer levels of detail, while AI specialists simply say image processing. A recurring challenge in cross-disciplinary work is that such geek speak confuses business leaders, who cannot decipher what the language means.

In addition, many acclaimed physicists assert that they are the major contributors to advancing the AI field, so rivalry friction exists between these disciplines as well, pardon the pun.

Neural networks are particularly good at enabling AI models to detect changes in radio waves or even gravitational waves, or to determine when specific cosmic rays may hit the Earth's atmosphere, providing timing insights as well.

Being able to encode different particle behaviors and observe their subtle changes over time provides a rich bed of AI modelling and interpretability, giving physicists deeper mathematical insight with which to encode their observations more accurately.

Other physics terms that surface in neural-network research include compressibility and conductivity. Even more exciting in bringing these two disciplines together is quantum tomography, which measures changes in a quantum state and has innovation relevance to quantum computing. Tomography is an exciting field that analyzes an object by sections, using any kind of penetrating wave. The method is applied in diverse areas including radiology, atmospheric science, geophysics, oceanography, plasma physics, astrophysics, quantum information, and other sciences. Its applications are endless and very exciting.
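For readers who want to see the core idea of tomography in code, here is a small illustrative round trip using scikit-image's Radon transform utilities. This is classical image tomography, not quantum tomography, and the test image is a standard synthetic phantom rather than real scan data.

```python
# An illustrative tomography round trip: project a test image into a
# sinogram (sections through the object), then reconstruct it by
# filtered back-projection.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon, rescale

image = rescale(shepp_logan_phantom(), 0.5)         # standard CT test image
theta = np.linspace(0.0, 180.0, max(image.shape))   # projection angles in degrees
sinogram = radon(image, theta=theta)                # forward projection
reconstruction = iradon(sinogram, theta=theta)      # filtered back-projection
print(np.abs(reconstruction - image).mean())        # small mean reconstruction error
```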

Machine learning methods help advance physics, just as physics has value and relevance to machine learning. The high computational power of machine learning is allowing physicists to tackle ever more complex problems, such as simulating global climate change by applying deep learning to curved geometric surfaces.

Michael Bronstein, a computer scientist at Imperial College London, and his fellow researchers helped advance geometric deep learning methods. They determined that going beyond the Euclidean plane would require them to reimagine one of the basic computational procedures that made neural networks so effective at 2D image recognition in the first place. This procedure lets a layer of the neural network perform a mathematical operation on small patches of the input data and then pass the results to the next layer in the network.
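The "small patches" procedure described here is convolution. Below is a bare-bones NumPy sketch of it, purely for illustration; production networks use optimized library implementations, and the kernel chosen here is an arbitrary edge detector.

```python
# A minimal sketch of 2D convolution: one mathematical operation applied to
# each small patch of the input, producing a feature map for the next layer.
import numpy as np

def conv2d(image: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            patch = image[i:i + kh, j:j + kw]   # small patch of the input
            out[i, j] = np.sum(patch * kernel)  # one operation per patch
    return out

image = np.random.rand(28, 28)
edge_kernel = np.array([[1.0, 0.0, -1.0]] * 3)  # crude vertical-edge detector
feature_map = conv2d(image, edge_kernel)        # passed on to the next layer
print(feature_map.shape)                        # (26, 26)
```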

Without going into too many details, these researchers reimagined that procedure so that a 3D shape bent into two different poses, such as a bear standing up and a bear sitting down, is recognized as two instances of the same object rather than as two distinct objects.

This work builds on the convolutional neural network (CNN), a type of network that specializes in processing data with a grid-like topology, such as an image. Each neuron works within its own receptive field and is connected to other neurons so that together they cover the entire visual field. After analyzing thousands of images of cats or dogs, classifying such images is not a difficult problem, because that kind of data set is easy to access.
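As an illustrative sketch only, here is how such a grid-topology network might be written in PyTorch. The layer sizes, input resolution, and two-class cat/dog framing are arbitrary choices under the assumptions above, not a reference implementation.

```python
# A minimal grid-topology CNN: stacked patch operations (convolutions) whose
# receptive fields collectively cover the whole image.
import torch
import torch.nn as nn

class TinyCNN(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1),  # each unit sees a local receptive field
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.features(x)                 # patch operations over the grid
        return self.classifier(x.flatten(1)) # flatten feature maps, then classify

model = TinyCNN()
logits = model(torch.randn(1, 3, 64, 64))    # one 64x64 RGB image
print(logits.shape)                          # torch.Size([1, 2])
```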

CNNs can be designed to detect rotated or reflected features in flat images without having to train on specific examples of those features, and spherical CNNs can create feature maps from data on the surface of a sphere without distorting it into a flat projection. The applications are endless and very exciting to physicists, for whom detecting features on object surfaces is key to their research methods.

By contrast with cats and dogs, consider finding cancerous tumors in diverse lung images: there, obtaining medically accurate, quality-validated labels is a far more difficult challenge.

In one joint government and academic research project, researchers used a newer gauge-CNN method to detect cyclones in climate data, reaching close to 98% accuracy. A gauge CNN would theoretically work on any curved surface of any dimensionality. With these advancements, the implications for climate monitoring using physics and AI techniques are unprecedented.

Summary

In summary, physics and machine learning have real similarities. Both disciplines are focused on making accurate observations, and both build models to predict future observations. One term physicists often use is covariance, meaning that the laws of physics should hold regardless of which coordinate system is used or which observers are involved, which nets out to a simple stress on independent thinking.

Einstein stated this best in 1916: "The general laws of nature are to be expressed by equations which hold good for all systems of coordinates."
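To illustrate formally what covariance means (an editorial illustration, not part of the quotation): a law written as a tensor equation keeps its form under any change of coordinates, because both sides transform by the same rule. In geometrized units, Einstein's field equations and the tensor transformation law read:

```latex
% Covariance, sketched: the field equations hold in every coordinate system
% because each tensor transforms identically under x -> x'.
G_{\mu\nu} = 8\pi T_{\mu\nu},
\qquad
T'^{\mu}{}_{\nu}
  = \frac{\partial x'^{\mu}}{\partial x^{\alpha}}
    \frac{\partial x^{\beta}}{\partial x'^{\nu}}\,
    T^{\alpha}{}_{\beta}.
```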


What key questions can board directors and CEOs ask to evaluate the depth of their organization's physics linkages to artificial intelligence?

1.) How many resources do you have with an undergraduate degree in physics, versus a master's or a doctoral degree?

2.) Of these total resources trained in physics disciplines, how many also have a specialization in artificial intelligence?

3.) How many of your most significant AI projects have physics expertise on hand to ensure increased interdisciplinary knowledge and know-how?

4.) How many of your board directors or C-suite executives have physics expertise blended with AI knowledge to tackle the world's most complex business problems?

These starting questions can help guide leaders in understanding their talent mix and in appreciating the value of diverse science disciplines in augmenting enterprise AI solution delivery teams.

I believe that board directors and CEOs need to understand their talent depth in the science disciplines, in addition to the AI disciplines, to ensure that their complex AI programs are optimized for success. The last three blogs, including this one, have looked at three disciplines: 1) computing science, 2) complexity science, and 3) physics. All were written to reinforce that the science disciplines are key to ensuring AI investments succeed, and that continued investments are made to help those investments evolve and achieve their value, whether in supporting humans to augment their decision making or in improving their operating processes.

The next blog in this AI Brain Trust series will discuss a general foundation of key AI terms and capabilities, providing more knowledge to help the C-suite get AI right and achieve more sustained success.

More Information:

To see the full AI Brain Trust Framework introduced in the first blog, reference here.

To learn more about artificial intelligence and its challenges, both positive and negative, refer to my new book, The AI Dilemma, written to guide leaders forward.

Note:

If you have any ideas, please do advise as I welcome your thoughts and perspectives.

Read more:
Why Physics has Relevancy To Artificial Intelligence And Building AI Leadership Brain Trust? - Forbes

Artificial Intelligence Is Misreading Human Emotion – The Atlantic

At a remote outpost in the mountainous highlands of Papua New Guinea, a young American psychologist named Paul Ekman arrived with a collection of flash cards and a new theory. It was 1967, and Ekman had heard that the Fore people of Okapa were so isolated from the wider world that they would be his ideal test subjects.

Like Western researchers before him, Ekman had come to Papua New Guinea to extract data from the indigenous community. He was gathering evidence to bolster a controversial hypothesis: that all humans exhibit a small number of universal emotions, or affects, that are innate and the same all over the world. For more than half a century, this claim has remained contentious, disputed among psychologists, anthropologists, and technologists. Nonetheless, it became a seed for a growing market that will be worth an estimated $56 billion by 2024. This is the story of how affect recognition came to be part of the artificial-intelligence industry, and the problems that presents.

When Ekman arrived in the tropics of Okapa, he ran experiments to assess how the Fore recognized emotions. Because the Fore had minimal contact with Westerners and mass media, Ekman had theorized that their recognition and display of core expressions would prove that such expressions were universal. His method was simple. He would show them flash cards of facial expressions and see if they described the emotion as he did. In Ekman's own words, "All I was doing was showing funny pictures." But Ekman had no training in Fore history, language, culture, or politics. His attempts to conduct his flash-card experiments using translators floundered; he and his subjects were exhausted by the process, which he described as like pulling teeth. Ekman left Papua New Guinea, frustrated by his first attempt at cross-cultural research on emotional expression. But this would be just the beginning.

Today affect-recognition tools can be found in national-security systems and at airports, in education and hiring start-ups, in software that purports to detect psychiatric illness, and in policing programs that claim to predict violence. The claim that a person's interior state can be accurately assessed by analyzing that person's face is premised on shaky evidence. A 2019 systematic review of the scientific literature on inferring emotions from facial movements, led by the psychologist and neuroscientist Lisa Feldman Barrett, found there is no reliable evidence that you can accurately predict someone's emotional state in this manner. "It is not possible to confidently infer happiness from a smile, anger from a scowl, or sadness from a frown, as much of current technology tries to do when applying what are mistakenly believed to be the scientific facts," the study concludes. So why has the idea that there is a small set of universal emotions, readily interpreted from a person's face, become so accepted in the AI field?

To understand that requires tracing the complex history and incentives behind how these ideas developed, long before AI emotion-detection tools were built into the infrastructure of everyday life.

The idea of automated affect recognition is as compelling as it is lucrative. Technology companies have captured immense volumes of surface-level imagery of human expressions, including billions of Instagram selfies, Pinterest portraits, TikTok videos, and Flickr photos. Much like facial recognition, affect recognition has become part of the core infrastructure of many platforms, from the biggest tech companies to small start-ups.

Whereas facial recognition attempts to identify a particular individual, affect recognition aims to detect and classify emotions by analyzing any face. These systems already influence how people behave and how social institutions operate, despite a lack of substantial scientific evidence that they work. Automated affect-detection systems are now widely deployed, particularly in hiring. The AI hiring company HireVue, which can list Goldman Sachs, Intel, and Unilever among its clients, uses machine learning to infer people's suitability for a job. In 2014, the company launched its AI system to extract microexpressions, tone of voice, and other variables from video job interviews, which it used to compare job applicants against a company's top performers. After considerable criticism from scholars and civil-rights groups, it dropped facial analysis in 2021, but kept vocal tone as an assessment criterion. In January 2016, Apple acquired the start-up Emotient, which claimed to have produced software capable of detecting emotions from images of faces. Perhaps the largest of these start-ups is Affectiva, a company based in Boston that emerged from academic work done at MIT.

Affectiva has coded a variety of emotion-related applications, primarily using deep-learning techniques. These approaches include detecting distracted and risky drivers on roads and measuring consumers' emotional responses to advertising. The company has built what it calls the world's largest emotion database, made up of more than 10 million people's expressions from 87 countries. Its monumental collection of videos was hand-labeled by crowdworkers based primarily in Cairo.

Outside the start-up sector, AI giants such as Amazon, Microsoft, and IBM have all designed systems for emotion detection. Microsoft offers perceived emotion detection in its Face API, identifying "anger, contempt, disgust, fear, happiness, neutral, sadness, and surprise," while Amazon's Rekognition tool similarly proclaims that it can identify what it characterizes as "all seven emotions" and "measure how these things change over time, such as constructing a timeline of the emotions of an actor."
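To show how little ceremony such labeling involves, here is a hedged sketch of calling Amazon Rekognition's DetectFaces API via the boto3 library; the image file, region, and AWS credentials are assumed, and the returned label is exactly the kind of perceived-emotion score this article questions, not a ground truth.

```python
# Illustrative sketch: asking Rekognition for its perceived emotions on a face.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")
with open("face.jpg", "rb") as f:  # hypothetical local image file
    response = client.detect_faces(Image={"Bytes": f.read()}, Attributes=["ALL"])

for face in response["FaceDetails"]:
    # Rekognition returns a confidence-scored list of perceived emotions;
    # this picks the highest-confidence label.
    top = max(face["Emotions"], key=lambda e: e["Confidence"])
    print(top["Type"], round(top["Confidence"], 1))
```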

Emotion-recognition systems share a similar set of blueprints and founding assumptions: that there is a small number of distinct and universal emotional categories, that we involuntarily reveal these emotions on our faces, and that they can be detected by machines. These articles of faith are so accepted in some fields that it can seem strange even to notice them, let alone question them. But if we look at how emotions came to be taxonomized, neatly ordered and labeled, we see that questions lie in wait at every corner.

Ekman's research began with a fortunate encounter with Silvan Tomkins, then an established psychologist at Princeton who had published the first volume of his magnum opus, Affect Imagery Consciousness, in 1962. Tomkins's work on affect had a huge influence on Ekman, who devoted much of his career to studying its implications. One aspect in particular played an outsize role: the idea that if affects are an innate set of evolutionary responses, they would be universal and thus recognizable across cultures. This desire for universality has an important bearing on why this theory is widely applied in AI emotion-recognition systems today. The theory could be applied everywhere, a simplification of complexity that was easily replicable at scale.

In the introduction to Affect Imagery Consciousness, Tomkins framed his theory of biologically based universal affects as one addressing an acute crisis of human sovereignty. He was challenging the development of behaviorism and psychoanalysis, two schools of thought that he believed treated consciousness as a mere by-product in service to other forces. He noted that human consciousness had been challenged and reduced again and again: first by Copernicus, who displaced man from the center of the universe; then by Darwin, whose theory of evolution shattered the idea that humans were created in the image of a Christian God; and most of all by Freud, who decentered human consciousness and reason as the driving forces behind our motivations. Tomkins continued, "The paradox of maximal control over nature and minimal control over human nature is in part a derivative of the neglect of the role of consciousness as a control mechanism." To put it simply, consciousness tells us little about why we feel and act the way we do. This is a crucial claim for all sorts of later applications of affect theory, which stress the inability of humans to recognize both the feeling and the expression of affects. If we as humans are incapable of truly detecting what we are feeling, then perhaps AI systems can do it for us?

Tomkins's theory of affects was his way to address the problem of human motivation. He argued that motivation was governed by two systems: affects and drives. Tomkins proposed that drives tend to be closely associated with immediate biological needs, such as hunger and thirst. They are instrumental; the pain of hunger can be remedied with food. But the primary system governing human motivation and behavior is that of affects, involving positive and negative feelings. Affects, which play the most important role in human motivation, amplify drive signals, but they are much more complex. For example, it is difficult to know the precise causes that lead a baby to cry, expressing the distress-anguish affect.

How can we know anything about a system in which the connections between cause and effect, stimulus and response, are so tenuous and uncertain? Tomkins proposed an answer: "The primary affects . . . seem to be innately related in a one-to-one fashion with an organ system which is extraordinarily visible," namely, the face. He found precedents for this emphasis on facial expression in two works published in the 19th century: Charles Darwin's The Expression of the Emotions in Man and Animals, from 1872, and an obscure volume by the French neurologist Guillaume-Benjamin-Amand Duchenne de Boulogne from 1862.

Tomkins assumed that the facial display of affects was a universal human trait. Affects, Tomkins believed, are "sets of muscle, vascular, and glandular responses located in the face and also widely distributed through the body, which generate sensory feedback . . . These organized sets of responses are triggered at subcortical centers where specific programs for each distinct affect are stored," a very early use of a computational metaphor for a human system. But Tomkins acknowledged that the interpretation of affective displays depends on individual, social, and cultural factors. He admitted that there were "very different dialects of facial language" in different societies. Even the forefather of affect research raised the possibility that interpreting facial displays depends on social and cultural context.

Given that facial expressions are culturally variable, using them to train machine-learning systems would inevitably mix together all sorts of different contexts, signals, and expectations. The problem for Ekman, and later for the field of computer vision, was how to reconcile these tensions.

During the mid-1960s, opportunity knocked at Ekman's door in the form of a large grant from what is now called the Defense Advanced Research Projects Agency (DARPA), a research arm of the Department of Defense. DARPA's sizable financial support allowed Ekman to begin his first studies to prove universality in facial expression. In general, these studies followed a design that would be copied in early AI labs. He largely duplicated Tomkins's methods, even using Tomkins's photographs to test subjects from Chile, Argentina, Brazil, the United States, and Japan. Subjects were presented with photographs of posed facial expressions, selected by the designers as exemplifying or expressing a particularly "pure" affect, such as fear, surprise, anger, happiness, sadness, and disgust. Subjects were then asked to choose among these affect categories and label the posed image. The analysis measured the degree to which the labels chosen by subjects correlated with those chosen by the designers.

From the start, the methodology had problems. Ekman's forced-choice response format would later be criticized for alerting subjects to the connections that designers had already made between facial expressions and emotions. Further, the fact that these emotions were faked would raise questions about the validity of the results.

The idea that interior states can be reliably inferred from external signs has a long history. It stems in part from the history of physiognomy, which was premised on studying a person's facial features for indications of his character. Aristotle believed that "it is possible to judge men's character from their physical appearance . . . for it has been assumed that body and soul are affected together." The Greeks also used physiognomy as an early form of racial classification, applied to "the genus man itself, dividing him into races, in so far as they differ in appearance and in character (for instance Egyptians, Thracians, and Scythians)."

Physiognomy in Western culture reached a high point during the 18th and 19th centuries, when it was seen as part of the anatomical sciences. A key figure in this tradition was the Swiss pastor Johann Kaspar Lavater, who wrote Essays on Physiognomy: For the Promotion of Knowledge and the Love of Mankind, originally published in German in 1789. Lavater took the approaches of physiognomy and blended them with the latest scientific knowledge. He believed that bone structure was an underlying connection between physical appearance and character type. If facial expressions were fleeting, skulls seemed to offer a more solid material for physiognomic inferences. Skull measurement was a popular technique in race science, and was used to support nationalism, white supremacy, and xenophobia. This work was infamously elaborated on throughout the 19th century by phrenologists such as Franz Joseph Gall and Johann Gaspar Spurzheim, as well as in scientific criminology through the work of Cesare Lombroso.

But it was the French neurologist Duchenne, described by Ekman as a "marvelously gifted observer," who codified the use of photography and other technical means in the study of human faces. In Mécanisme de la physionomie humaine, Duchenne laid important foundations for both Darwin and Ekman, connecting older ideas from physiognomy and phrenology with more modern investigations into physiology and psychology. He replaced vague assertions about character with a more limited investigation into expression and interior mental and emotional states.

Duchenne worked in Paris at the Salpêtrière asylum, which housed up to 5,000 people with a wide range of mental illnesses and neurological conditions. Some would become his subjects for distressing experiments, part of the long tradition of medical and technological experimentation on the most vulnerable, those who cannot refuse. Duchenne, who was little known in the scientific community, decided to develop techniques of electrical shocks to stimulate isolated muscle movements in people's faces. His aim was to build a more complete anatomical and physiological understanding of the face. Duchenne used these methods to bridge the new psychological science and the much older study of physiognomic signs, or passions. He relied on the latest photographic advancements, such as collodion processing, which allowed for much shorter exposure times, enabling Duchenne to freeze fleeting muscular movements and facial expressions in images.

Even at these early stages, the faces were never natural or socially occurring human expressions but simulations produced by the brute application of electricity to the muscles. Regardless, Duchenne believed that the use of photography and other technical systems would transform the squishy business of representation into something objective and evidentiary, more suitable for scientific study. Darwin praised Duchenne's "magnificent photographs" and included reproductions in his own work.

Ekman would follow Duchenne in placing photography at the center of his experimental practice. He believed that slow-motion photography was essential to his approach, because many facial expressions operate at the limits of human perception. The aim was to find so-called microexpressions: tiny muscle movements in the face.

One of Ekman's ambitious plans in his early research was to codify a system for detecting and analyzing facial expressions. In 1971, he co-published a description of what he called the Facial Affect Scoring Technique (FAST).

Relying on posed photographs, the approach used six basic emotional types largely derived from Ekman's intuitions. But FAST soon ran into problems when other scientists encountered facial expressions not included in its typology. So Ekman decided to ground his next measurement tool in facial musculature, harkening back to Duchenne's original electroshock studies. Ekman identified roughly 40 distinct muscular contractions on the face and called the basic components of each facial expression an "action unit." After some testing and validation, Ekman and Wallace Friesen published the Facial Action Coding System (FACS) in 1978; updated editions continue to be widely used.
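To give a flavor of how FACS-style coding gets operationalized in software, here is an illustrative toy mapping from observed action units to two commonly cited emotion prototypes (AU6 cheek raiser plus AU12 lip corner puller for happiness; AU1, AU2, AU5, and AU26 for surprise). Real FACS coding is far more nuanced; this simplification is exactly the kind of shortcut that machine-learning systems tend to bake in.

```python
# Toy illustration only: prototype action-unit sets mapped to emotion labels.
# The prototypes are commonly cited simplifications, not an official standard.
EMOTION_PROTOTYPES = {
    frozenset({6, 12}): "happiness",
    frozenset({1, 2, 5, 26}): "surprise",
}

def label_from_action_units(observed_aus: set[int]) -> str:
    for prototype, emotion in EMOTION_PROTOTYPES.items():
        if prototype <= observed_aus:   # all prototype AUs present
            return emotion
    return "no prototype matched"

print(label_from_action_units({1, 2, 5, 7, 26}))  # prints "surprise"
```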

Despite its financial success, FACS was very labor-intensive to use. Ekman wrote that it took 75 to 100 hours to train users in the FACS methodology, and an hour to score a single minute of facial footage. This challenge presented exactly the type of opportunity that the emerging field of computer vision was hungry to take on.

As work into the use of computers in affect recognition began to take shape, researchers recognized the need for a collection of standardized images to experiment with. A 1992 National Science Foundation report co-written by Ekman recommended that "a readily accessible, multimedia database shared by the diverse facial research community would be an important resource for the resolution and extension of issues concerning facial understanding." Within a year, the Department of Defense began funding a program to collect facial photographs. By the end of the decade, machine-learning researchers had started to assemble, label, and make public the data sets that drive much of today's machine-learning research. Academic labs and companies worked on parallel projects, creating scores of photo databases. For example, researchers in a lab in Sweden created the Karolinska Directed Emotional Faces database, which comprises images of individuals portraying posed emotional expressions corresponding to Ekman's categories. They've made their faces into the shapes that accord with six basic emotional states: joy, anger, disgust, sadness, surprise, and fear. When looking at these training sets, it is difficult not to be struck by a sense of pantomime: Incredible surprise! Abundant joy! Paralyzing fear! These subjects are literally making machine-readable emotion.

As the field grew in scale and complexity, so did the types of photographs used in affect recognition. Researchers began using the FACS system to label data generated not from posed expressions but rather from spontaneous facial expressions, sometimes gathered outside of laboratory conditions. Ekman's work had a profound and wide-ranging influence. The New York Times described Ekman as "the world's most famous face reader," and Time named him one of the 100 most influential people in the world. He would eventually consult with clients as disparate as the Dalai Lama, the FBI, the CIA, the Secret Service, and the animation studio Pixar, which wanted to create more lifelike renderings of cartoon faces. His ideas became part of popular culture, included in best sellers such as Malcolm Gladwell's Blink and a television drama, Lie to Me, on which Ekman was a consultant for the lead character's role, apparently loosely based on him.

His business prospered: Ekman sold techniques of deception detection to agencies such as the Transportation Security Administration, which used them to develop the Screening of Passengers by Observation Techniques (SPOT) program. SPOT has been used to monitor air travelers' facial expressions since the September 11 attacks, in an attempt to automatically detect terrorists. The system uses a set of 94 criteria, all of which are allegedly signs of stress, fear, or deception. But looking for these responses means that some groups are immediately disadvantaged. Anyone who is stressed, is uncomfortable under questioning, or has had negative experiences with police and border guards can score higher. This creates its own forms of racial profiling. The SPOT program has been criticized by the Government Accountability Office and civil-liberties groups for its racial bias and lack of scientific methodology. Despite its $900 million price tag, there is no evidence that it has produced clear successes.

As Ekman's fame spread, so did the skepticism of his work, with critiques emerging from a number of fields. An early critic was the cultural anthropologist Margaret Mead, who debated Ekman on the question of the universality of emotions in the late 1960s. Mead was unconvinced by Ekman's belief in universal, biological determinants of behavior that exist separately from highly conditioned cultural factors.

Scientists from different fields joined the chorus over the decades. In more recent years, the psychologists James Russell and José-Miguel Fernández-Dols have shown that the most basic aspects of the science remain uncertain. Perhaps the foremost critic of Ekman's theory is the historian of science Ruth Leys, who sees a fundamental circularity in Ekman's method. The posed or simulated photographs he used were assumed to express a set of basic affective states that were, Leys wrote, "already free of cultural influence." These photographs were then used to elicit labels from different populations to demonstrate the universality of facial expressions. The psychologist and neuroscientist Lisa Feldman Barrett puts it bluntly: "Companies can say whatever they want, but the data are clear. They can detect a scowl, but that's not the same thing as detecting anger."

More troubling still, researchers in the field of the study of emotions have not reached consensus about what an emotion actually is. What emotions are, how they are formulated within us and expressed, what their physiological or neurobiological functions could be, their relation to stimuli: all of this remains stubbornly unsettled. Why, with so many critiques, has the approach of reading emotions from a person's face endured? Since the 1960s, driven by significant Department of Defense funding, multiple systems have been developed that are more and more accurate at measuring facial movements. Ekman's theory seemed ideal for computer vision because it could be automated at scale. The theory fit what the tools could do.

Powerful institutional and corporate investments have been made based on the perceived validity of Ekman's theories and methodologies. Recognizing that emotions are not easily classified, or that they're not reliably detectable from facial expressions, could undermine an expanding industry. Many machine-learning papers cite Ekman as though these issues are resolved, before proceeding directly into engineering challenges. The more complex issues of context, conditioning, relationality, and culture are often ignored. Ekman himself has said he is concerned about how his ideas are being commercialized, but when he's written to tech companies asking for evidence that their emotion-recognition programs work, he has received no reply.

Instead of trying to build more systems that group expressions into machine-readable categories, we should question the origins of those categories themselves, as well as their social and political consequences. For example, these systems are known to flag the speech affects of women, particularly Black women, differently from those of men. A study conducted at the University of Maryland has shown that some facial recognition software interprets Black faces as having more negative emotions than white faces, specifically registering them as angrier and more contemptuous, even when controlling for their degree of smiling.

This is the danger of automating emotion recognition. These tools can take us back to the phrenological past, when spurious claims were used to support existing systems of power. The decades of scientific controversy around inferring emotional states consistently from a person's face underscore a central point: one-size-fits-all detection is not the right approach. Emotions are complicated, and they develop and change in relation to our cultures and histories, all the manifold contexts that live outside the AI frame.

But already, job applicants are judged unfairly because their facial expressions or vocal tones don't match those of other employees. Students are flagged at school because their faces appear angry, and customers are questioned because their facial cues indicate they may be shoplifters. These are the people who will bear the costs of systems that are not just technically imperfect, but based on questionable methodologies. A narrow taxonomy of emotions, grown from Ekman's initial experiments, is being coded into machine-learning systems as a proxy for the infinite complexity of emotional experience in the world.

This article is adapted from Kate Crawford's recent book, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence.

Read the original:
Artificial Intelligence Is Misreading Human Emotion - The Atlantic

This Researcher Says AI Is Neither Artificial nor Intelligent – WIRED

Technology companies like to portray artificial intelligence as a precise and powerful tool for good. Kate Crawford says that mythology is flawed. In her book Atlas of AI, she visits a lithium mine, an Amazon warehouse, and a 19th-century phrenological skull archive to illustrate the natural resources, human sweat, and bad science underpinning some versions of the technology. Crawford, a professor at the University of Southern California and researcher at Microsoft, says many applications and side effects of AI are in urgent need of regulation.

Crawford recently discussed these issues with WIRED senior writer Tom Simonite. An edited transcript follows.

WIRED: Few people understand all the technical details of artificial intelligence. You argue that some experts working on the technology misunderstand AI more deeply.

KATE CRAWFORD: It is presented as this ethereal and objective way of making decisions, something that we can plug into everything from teaching kids to deciding who gets bail. But the name is deceptive: AI is neither artificial nor intelligent.


AI is made from vast amounts of natural resources, fuel, and human labor. And it's not intelligent in any kind of human-intelligence way. It's not able to discern things without extensive human training, and it has a completely different statistical logic for how meaning is made. Since the very beginning of AI back in 1956, we've made this terrible error, a sort of original sin of the field, to believe that minds are like computers and vice versa. We assume these things are an analog to human intelligence, and nothing could be further from the truth.

You take on that myth by showing how AI is constructed. Like many industrial processes, it turns out to be messy. Some machine-learning systems are built with hastily collected data, which can cause problems like face-recognition services being more error-prone on minorities.

We need to look at the nose to tail production of artificial intelligence. The seeds of the data problem were planted in the 1980s, when it became common to use data sets without close knowledge of what was inside, or concern for privacy. It was just raw material, reused across thousands of projects.

This evolved into an ideology of mass data extraction, but data isn't an inert substance; it always brings a context and a politics. Sentences from Reddit will be different from those in kids' books. Images from mugshot databases have different histories than those from the Oscars, but they are all used alike. This causes a host of problems downstream. In 2021, there's still no industry-wide standard to note what kinds of data are held in training sets, how it was acquired, or potential ethical issues.

You trace the roots of emotion-recognition software to dubious science funded by the Department of Defense in the 1960s. A recent review of more than 1,000 research papers found no evidence that a person's emotions can be reliably inferred from their face.

Emotion detection represents the fantasy that technology will finally answer questions that we have about human nature that are not technical questions at all. This idea that's so contested in the field of psychology made the jump into machine learning because it is a simple theory that fits the tools. Recording people's faces and correlating that to simple, predefined emotional states works with machine learning, if you drop culture and context and the fact that you might change the way you look and feel hundreds of times a day.

That also becomes a feedback loop: Because we have emotion-detection tools, people say we want to apply them in schools and courtrooms and to catch potential shoplifters. Recently, companies have been using the pandemic as a pretext to apply emotion recognition to kids in schools. This takes us back to the phrenological past, this belief that you can detect character and personality from the face and the skull shape.

You contributed to recent growth in research into how AI can have undesirable effects. But that field is entangled with people and funding from the tech industry, which seeks to profit from AI. Google recently forced out two respected researchers on AI ethics, Timnit Gebru and Margaret Mitchell. Does industry involvement limit research questioning AI?

Visit link:
This Researcher Says AI Is Neither Artificial nor Intelligent - WIRED

The EU path towards regulation on artificial intelligence – Brookings Institution

Advances in AI are making their way into all the products and services we interact with. Our cars are outfitted with tools that trigger automatic braking, platforms such as Netflix proactively suggest recommendations for viewing, Alexa and Google can predict our search needs, and Spotify can recommend songs and curate listening lists much better than you or I can.

Although the advantages of AI in our daily lives are undeniable, people are concerned about its dangers. Inadequate physical security, economic losses, and ethical issues are just a few examples of the damage AI could cause. In response to these dangers, the European Union is working on a legal framework to regulate artificial intelligence. Recently, the European Commission proposed its first legal framework on artificial intelligence. This proposal is the result of long and complicated work carried out by the European authorities. Previously, the European Parliament had issued a resolution containing recommendations to the European Commission. Before that, the EU legislators enacted the 2017 Resolution and the Report on the safety and liability implications of Artificial Intelligence, the Internet of Things, and Robotics accompanying the European Commission White Paper on Artificial Intelligence in 2020.

In the Resolution of October 20, 2020 on the civil liability regime for artificial intelligence, the European Parliament acknowledged that the current legal system lacks a specific discipline concerning liability for AI systems. According to the legislative body, the abilities and autonomy of these technologies make it challenging to trace decisions back to specific humans. As a result, a person who suffers damage caused by an AI system generally cannot be compensated without proof of the operator's liability. For this reason, the Resolution formulated a proposal at Annex B with recommendations to the European Commission. This proposal has 17 pages, five chapters, and 15 articles.

Following the recommendations of the European Parliament, on April 21, 2021, the European Commission presented its proposal for an AI legal framework in a 108-page document with nine annexes. The framework follows a risk-based approach and differentiates uses of AI according to whether they create an unacceptable risk, a high risk, or a low risk. A risk is unacceptable, and prohibited for this reason, if it poses a clear threat to people's security and fundamental rights. The European Commission has identified as examples of unacceptable risk uses of AI that manipulate human behavior and systems that allow social-credit scoring. For example, this European legal framework would prohibit an AI system similar to China's social-credit scoring system.

The European Commission defined high-risk as a system intended to be used as a safety component, subject to a compliance check by a third party. The concept of high risk is further specified by Annex III of the European Commission's proposal, which considers eight areas. Among these are high-risk AI systems related to critical infrastructure (such as road traffic and water supply), educational training (e.g., the use of AI systems to score tests and exams), safety components of products (e.g., robot-assisted surgery), and employee selection (e.g., resume-sorting software). AI systems that fall into the high-risk category are subject to strict requirements with which they must comply before being placed on the market. Among these requirements are the adoption of an adequate risk assessment, the traceability of results, the provision of adequate information about the AI system to the user, and a guarantee of a high level of security. Furthermore, adequate human oversight must be present.

If AI systems pose a low risk, they must comply with transparency obligations. In this case, users need to be aware that they are interacting with a machine. For example, in the case of a deepfake, where a person's images and videos are manipulated to look like someone else's, the manipulated image or video content must be disclosed as such to users. The European Commission draft does not regulate AI systems that pose little or no risk to European citizens, such as AI used in video games.

In its framework, the European Commission adopts an innovation-friendly approach. A very interesting aspect is that the Commission supports innovation through so-called AI regulatory sandboxes for non-high-risk AI systems, which provide an environment that facilitates the development and testing of innovative AI systems.

The Commission's proposal represents a very important step towards the regulation of artificial intelligence. As a next step, the European Parliament and the member states will have to adopt the Commission's proposal. Once adopted, the new legal framework will be directly applicable throughout the European Union. The framework will have a strong economic impact on many individuals, companies, and organizations. Its relevance is heightened by the fact that its effects could extend beyond the European Union's borders, affecting foreign tech companies that operate within the EU. From this point of view, the need to adopt a legal framework on artificial intelligence appears crucial. Indeed, AI systems have in several cases shown severe limitations, such as an Amazon recruiting system that discriminated against women, or a recent accident involving a Tesla car driving in Autopilot mode that caused the death of two men. These examples invite serious reflection on the need to adopt similar legal frameworks in jurisdictions other than the European Union.

Amazon and Google are general, unrestricted donors to the Brookings Institution. The findings, interpretations and conclusions in this piece are solely those of the authors and not influenced by any donation.

View post:
The EU path towards regulation on artificial intelligence - Brookings Institution

NATO tees up negotiations on artificial intelligence in weapons – C4ISRNet

COLOGNE, Germany - NATO officials are kicking around a new set of questions for member states on artificial intelligence in defense applications, as the alliance seeks common ground ahead of a strategy document planned for this summer.

The move comes amid a grand effort to sharpen NATO's edge in what officials call emerging and disruptive technologies, or EDT. Autonomous and artificial intelligence-enabled weaponry is a key element in that push, aimed at ensuring tech leadership on a global scale.

Exactly where the alliance falls on the spectrum between permitting AI-powered defense technology in some applications and disavowing it in others is expected to be a hotly debated topic in the run-up to the June 14 NATO summit.

"We have agreed that we need principles of responsible use, but we're also in the process of delineating specific technologies," David van Weel, the alliance's assistant secretary-general for emerging security challenges, said at a web event earlier this month organized by the Estonian Defence Ministry.

Different rules could apply to different systems depending on their intended use and the level of autonomy involved, he said. For example, an algorithm sifting through data as part of a back-office operation at NATO headquarters in Brussels would be subjected to a different level of scrutiny than an autonomous weapon.

In addition, rules are in the works for industry to understand the requirements involved in making systems adhere to a future NATO policy on artificial intelligence. The idea is to present a menu of quantifiable principles for companies to determine what their products can live up to, van Weel said.

For now, alliance officials are teeing up questions to guide the upcoming discussion, he added.

Those range from basic introspections about whether AI-enabled systems fall under NATO's legal mandates, van Weel explained, to whether a given system is free of bias, meaning whether its decision-making tilts in a particular direction.


Accountability and transparency are two more buzzwords expected to loom large in the debate. Accidents with autonomous vehicles, for example, will raise the question of who is responsible: manufacturers or operators.

The level of visibility into how systems make decisions will also be crucial, according to van Weel. "Can you explain to me as an operator what your autonomous vehicle does, and why it does certain things? And if it does things that we didn't expect, can we then turn it off?" he asked.

NATO's effort to hammer out common ground on artificial intelligence follows a push by the European Union to do the same, albeit without considering military applications. In addition, the United Nations has long been a forum for discussing the implications of weaponizing AI.

Some of those organizations have essentially reinvented the wheel every time, according to Frank Sauer, a researcher at the Bundeswehr University in Munich.

Regulators tend to focus too much on slicing and dicing through various definitions of autonomy and pairing them with potential use cases, he said.

"You have to think about this in a technology-agnostic way," Sauer argued, suggesting that officials place greater emphasis on the precise mechanics of human control. "Let's just assume the machine can do everything it wants: what role are humans supposed to play?"

Read more from the original source:
NATO tees up negotiations on artificial intelligence in weapons - C4ISRNet