Archive for the ‘Alphago’ Category

Experts believe a neuro-symbolic approach to be the next big thing in AI. Does it live up to the claims? – Analytics India Magazine

In their 2009 manifesto, Neural-Symbolic Cognitive Reasoning, Artur Garcez and Luis Lamb discussed the idea, popular in the 1990s, of integrating neural networks and symbolic knowledge. They cited Towell and Shavlik's KBANN (Knowledge-Based Artificial Neural Network), a system that inserts rules into a neural network, refines them with data, and extracts knowledge from the trained network; the model proved empirically effective. Industry leaders, including contingents at IBM, Intel, Google, Facebook, and Microsoft, and researchers like Josh Tenenbaum, Anima Anandkumar, and Yejin Choi, are starting to apply this technique in 2022. Recent AI developments, challenges, and stagnation are leading the industry to consider this hybrid approach to AI modelling.

Neuro-Symbolic AI is essentially a hybrid AI that leverages deep learning neural network architectures and combines them with symbolic reasoning techniques. For example, we have been using neural networks to identify the shape or colour of a particular object. Applying symbolic reasoning on top can take this a step further and infer more interesting properties of the object, such as its area or volume.
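
As a rough illustration of this pattern (my own sketch in Python, not code from the article): a neural model handles perception, and a small symbolic rule base derives further properties from what it perceives. The perception function, rules, and attribute names here are all hypothetical.

```python
import math

# Hypothetical perception step: in a real system this would be a trained
# neural network returning a symbolic label and measured attributes.
def neural_perception(image):
    # e.g. a CNN that outputs ("circle", {"radius": 3.0}) for this image
    return "circle", {"radius": 3.0}

# Symbolic knowledge base: rules that derive new facts from perceived symbols.
RULES = {
    "circle": lambda attrs: {"area": math.pi * attrs["radius"] ** 2},
    "square": lambda attrs: {"area": attrs["side"] ** 2,
                             "perimeter": 4 * attrs["side"]},
}

def describe(image):
    label, attrs = neural_perception(image)          # sub-symbolic step
    derived = RULES.get(label, lambda a: {})(attrs)  # symbolic reasoning step
    return {"label": label, **attrs, **derived}

print(describe(image=None))  # {'label': 'circle', 'radius': 3.0, 'area': 28.27...}
```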

AI has been the talk of the town for more than a decade, and while it has delivered on several promises, the majority of its claims are still unmet and the challenges connected to AI have only been increasing. In the past year, GPT-3 has told a test user to commit suicide, Alexa has challenged a ten-year-old to touch a coin to the exposed prongs of a plug, and Facebook's algorithm has identified a man of colour as a primate. This is not so different from Microsoft's Tay asserting that Hitler was right or Uber's self-driving cars running red lights a few years ago. With every advance, we have seen a setback. The present efforts to ensure explainable, fair, ethical and efficient AI need to be supported by changes in how we approach artificial intelligence.

Scientist, AI author, and entrepreneur Gary Marcus recently wrote about deep learning hitting a wall and about what a reliable AGI would demand: "It must be like stainless steel, stronger and more reliable and, for that matter, easier to work with than any of its constituent parts. No single AI approach will ever be enough on its own; we must master the art of putting diverse approaches together if we are to have any hope at all."

Since AI uses learning and reasoning in its quest to be like humans, the neuro-symbolic approach lets us combine these strengths: learning through neural networks and making inferences through symbolic representations. Knowable Magazine argued that this hybrid shows duckling-like abilities. Ducklings can imprint on colours and shapes and differentiate between them; moreover, they can tell "same" from "different", something AI still struggles with. Here, symbolic AI would hold symbols for physical objects and colours in its knowledge base, along with general rules for distinguishing them. Combined with deep nets, this allows the model to be more efficient.

The combination requires humans to supply a knowledge base of symbolic rules for the AI to leverage, while the deep nets automatically find the correct answers. "The hybrid uses deep nets, instead of humans, to generate only those portions of the knowledge base that it needs to answer a given question," notes Knowable Magazine.

DeepMind has seen some of its best successes with board-game-playing models such as AlphaGo, AlphaZero and MuZero. These are hybrid models that use symbolic AI: AlphaGo combined symbolic tree search with deep learning, and AlphaFold2 combines symbolic ways of representing the 3D physical structure of molecules with the data-trawling strengths of deep learning. DeepMind made the case for symbolic learning in a recent blog post: this approach "will allow for AI to interpret something as symbolic on its own rather than simply manipulate things that are only symbols to human onlookers", they said, and that allows for AI with human-like fluency. IBM has likewise asserted that neuro-symbolic AI is about getting AI to reason. Its Logical Neural Network (LNN) technique was built on the foundations of deep learning and symbolic AI; given the combination, the software can successfully answer complex questions with minimal domain-specific training.
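
To make the AlphaGo-style hybrid concrete, here is a minimal sketch (my own, not DeepMind's code) of a symbolic game-tree search, plain negamax, whose leaf positions are scored by a learned evaluation function. `value_net`, `legal_moves`, and `apply` are placeholder assumptions standing in for a trained network and a real game engine.

```python
import random

def value_net(state):
    # Placeholder for a trained neural network that scores a position
    # from the perspective of the player to move (range -1..1).
    return random.uniform(-1, 1)

def legal_moves(state):
    # Placeholder: enumerate successor positions for the game being modelled.
    return state.get("moves", [])

def apply(state, move):
    # Placeholder: return the position reached by playing `move`.
    return move

def negamax(state, depth):
    """Symbolic tree search with a learned evaluation at the leaves."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return value_net(state), None
    best_score, best_move = float("-inf"), None
    for move in moves:
        score, _ = negamax(apply(state, move), depth - 1)
        score = -score  # the opponent's gain is our loss
        if score > best_score:
            best_score, best_move = score, move
    return best_score, best_move
```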

Two major events have asserted this need: the 2019 Montreal AI Debate between Yoshua Bengio and Gary Marcus, and the AAAI-2020 fireside conversation with Nobel laureate Daniel Kahneman, Geoffrey Hinton, Yoshua Bengio and Yann LeCun. The key takeaway from both was that AI needs a reasoning layer alongside deep learning to frame a rich future for the field.

Excerpt from:
Experts believe a neuro-symbolic approach to be the next big thing in AI. Does it live up to the claims? - Analytics India Magazine

Measuring Attention In Science And Technology – Forbes

Soccer fans cheering

Most people alive today who are not in science or medicine would struggle to quickly recall even ten biologists or ten chemists and their discoveries. Most of them can have a happy life without that knowledge. But many will easily recall ten popular sports people, singers, actors, and even the members of the Kardashian family. Some will recall celebrity scientists from the past, including Einstein, Newton, and Tesla. Celebrity scientists nowadays are very rare. In this long article I would like to explore some of the reasons why there is a lack of celebrity scientists, why we need more of them, and how to track the popularity of academic papers.

circa 1955: Mathematical physicist Albert Einstein (1879 - 1955) delivers one of his recorded lectures. (Photo by Keystone/Getty Images)

Have you ever heard someone being told that they have a memory like a goldfish? Ever wondered what this expression even means? It means that they tend to forget things rather quickly, or that they have a short memory span. But the truth is that this expression is a myth: goldfish do not have the 3-second memory span we often hear about. According to this article by Live Science, some goldfish have a memory span of 2 seconds and others 10 seconds, but it's always short. Humans, however, are not goldfish. We have the capacity to retain various types of information for different lengths of time: short-term memories last seconds to hours, while long-term memories last for many years. According to this article by National Geographic, we also have a working memory that allows us to keep something in mind for a limited time by repeating it. Whenever you read an academic article over and over again to memorize it, you're using your working memory. Another way to categorize memories is by the subject of the memory itself: you tend to remember the things you most deeply and passionately care about. Everyone cares about different things, and so people retain information differently. I believe the most important type of information we must retain is scientific knowledge and discoveries. It is sometimes frustrating to see people forget some of the most famous modern-day scientists in the world while choosing to remember certain celebrities who are famous for being famous.

NEW YORK, NY - FEBRUARY 11: (L-R) Khloe Kardashian, Lamar Odom, Kris Jenner, Kendall Jenner, Kourtney Kardashian, Kanye West, Kim Kardashian, Caitlin Jenner and Kylie Jenner attend Kanye West Yeezy Season 3 on February 11, 2016 in New York City. (Photo by Jamie McCarthy/Getty Images for Yeezy Season 3)

For example, you may constantly hear about the Kardashians on the news or on social media, but when was the last time you heard about Rosalind Franklin? Or John Ray? Or John Logie Baird? Rosalind's work was instrumental in understanding the molecular structure of DNA; John Ray published important works on botany, zoology, and natural theology; and John Baird demonstrated the world's first live working television system. The work of the former two is crucial to our understanding of humans, whereas the invention of the latter is something we rely on to spread information. The truth is that while there's nothing wrong with celebrities dominating the news cycle in today's day and age, we must also not forget those who have contributed so much to science.

CEO of Tesla Motors and SpaceX, Elon Musk speaks to a crowd of people who had gathered to participate in Tesla Motors test drives outside the Texas Capitol building in Austin on January 15th, 2015. Tesla manufactures and sells electric cars. (Photo by Robert Daemmrich Photography Inc/Corbis via Getty Images)

It was not until Elon Musk entered the game and started promoting hardcore science in the consumer space that we got the first celebrity scientist-entrepreneur whom smart kids put on their walls as a role model. Still, very few people know about Elon's early successes like Zip2 and X.com. Even today, most people know about Elon because of SpaceX and Tesla. Neil deGrasse Tyson, a highly accomplished astrophysicist, gained more attention after he popularized science with books such as The Pluto Files and through his frequent appearances on television. Bill Nye had to host the science television show Bill Nye the Science Guy and the Netflix show Bill Nye Saves the World to get famous. Few people know about his career as a mechanical engineer for Boeing; fewer know about his many patents.

Lee Se-Dol (C), one of the greatest modern players of the ancient board game Go, arrives before the third game of the Google DeepMind Challenge Match against Google-developed supercomputer AlphaGo at a hotel in Seoul on March 12, 2016. / AFP / JUNG Yeon-Je (Photo credit should read JUNG YEON-JE/AFP via Getty Images)

When it comes to companies, DeepMind, a subsidiary of Alphabet that builds advanced AI, managed to achieve greatness and fame by combining science with carefully crafted PR. In 2017, DeepMind released a documentary called AlphaGo: The Movie, which helped inspire a generation of scientists as well as the general public. The documentary is about a five-game Go match between world champion Lee Sedol and AlphaGo, a computer program that combines an advanced tree search with deep neural networks. These networks take a description of the Go board as input and process it through a number of network layers containing millions of neuron-like connections (spoiler alert: the AI wins). Not only did the documentary receive multiple awards and nominations, but it also inspired people to work on AI.

Altmetric page of an academic paper

Very often, we see truly amazing scientific achievements published in peer-reviewed journals, at conferences, or on pre-print servers, yet they are largely ignored by the mainstream public. To help measure attention beyond academia, Nature Publishing Group adopted a measure of attention provided by Altmetric.

According to its website, altmetrics "are metrics and qualitative data that are complementary to traditional, citation-based metrics". They can include peer reviews on Faculty of 1000, citations on Wikipedia and in public policy documents, discussions on research blogs, mainstream media coverage, bookmarks on reference managers like Mendeley, and mentions on social networks such as Twitter. Sourced from the internet, Altmetric data can tell you how often journal articles and other scholarly outputs like datasets are discussed and used around the world. Altmetric has been incorporated into researchers' websites, institutional repositories, journal websites, and more.

Altmetric - the measure of attention of academic papers

In order to track online attention for a specific piece of research, Altmetric requires three things: an output (journal article, dataset, etc.), an identifier attached to the output (DOI, RePEc ID, etc.), and mentions in a source that it tracks.

Once it tracks a mention of the research, Altmetric collates it with any other online attention it has seen for that item. It then displays everything on the Altmetric details page, along with its unique donut and an automatically calculated Altmetric Attention Score. Altmetric also has the technological capacity to track items by their unique research identifiers. So far, Altmetric has worked with institutions, funders, think-tanks and publishers to track attention to their press releases, grey literature and company reports.
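
For readers who want to pull these numbers programmatically, Altmetric exposes a public API keyed by identifiers such as the DOI. The sketch below is only an example under assumptions: the endpoint path and response fields are taken from Altmetric's public documentation as I understand it, not from details given in the article.

```python
import requests

def altmetric_summary(doi):
    """Fetch Altmetric details for a DOI via the public v1 API.

    Endpoint and field names are assumptions based on Altmetric's public
    documentation; returns None if the DOI has no recorded attention.
    """
    resp = requests.get(f"https://api.altmetric.com/v1/doi/{doi}", timeout=10)
    if resp.status_code == 404:      # no attention tracked for this output
        return None
    resp.raise_for_status()
    data = resp.json()
    return {
        "title": data.get("title"),
        "attention_score": data.get("score"),
        "details_page": data.get("details_url"),
    }

# Replace with a real DOI of interest; 10.1000/... is just a placeholder.
print(altmetric_summary("10.1000/example-doi"))
```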

The Nature Index is another great database of author affiliations and institutional relationships. It tracks roughly 60,000 research articles per year from 82 high-quality natural-science journals and provides absolute and fractional counts of article publication at the institutional and national level. The Nature Index was conceived by Nature Research, publisher of Nature, one of the world's leading multidisciplinary science journals. In total, more than 10,000 institutions are listed in the Nature Index.

Here's how the Nature Index works.

According to its website, the Nature Index uses article count (called Count) and fractional count (called Share) to track research output. A country/region or an institution is given a Count of 1 for each article that has at least one author from that country/region or institution. This is the case regardless of the number of authors an article has, and it means that the same article can contribute to the Count of multiple countries/regions or institutions.

To look at a country's, a region's or an institution's contribution to an article, and to ensure it is not counted more than once, the Nature Index uses a fractional count, referred to as Share, which takes into account the share of authorship on each article. The total Share available per article is 1, which is distributed among all authors under the assumption that each contributed equally. For instance, in an article with 10 authors, each author receives a Share of 0.1. For authors affiliated with more than one institution, the author's Share is split equally between each institution. The total Share for an institution is calculated by summing the Share of its individual affiliated authors. The process is similar for countries/regions.
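
As a worked illustration of the counting rules just described (my own sketch, not Nature Index code), the function below computes Count and Share for institutions from a list of articles, where each author is represented by a list of affiliations.

```python
from collections import defaultdict

def nature_index_counts(articles):
    """Compute Count and Share per institution.

    `articles` is a list of articles; each article is a list of authors,
    and each author is a list of that author's institutional affiliations.
    """
    count = defaultdict(int)
    share = defaultdict(float)
    for authors in articles:
        n_authors = len(authors)
        institutions_in_article = set()
        for affiliations in authors:
            author_share = 1.0 / n_authors                      # equal share per author
            per_institution = author_share / len(affiliations)  # split across affiliations
            for inst in affiliations:
                share[inst] += per_institution
                institutions_in_article.add(inst)
        for inst in institutions_in_article:
            count[inst] += 1   # a Count of 1 per article for each institution involved
    return dict(count), dict(share)

# One article with 10 authors: 9 from "Univ A", 1 affiliated with both "Univ A" and "Univ B".
article = [["Univ A"]] * 9 + [["Univ A", "Univ B"]]
print(nature_index_counts([article]))
# Count: {'Univ A': 1, 'Univ B': 1}; Share: {'Univ A': 0.95, 'Univ B': 0.05}
```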

Other publishers have tried to copy the model, but Altmetric remains the de facto standard, and some of its concepts have been integrated into many other places. For example, at Insilico Medicine we use a tool called TargetMetric in PandaOmics to track the industry's attention to protein targets; it also presents a flower composed of the various sources of attention. It looks like Altmetric inspired many companies to represent attention in the flower format.

More conservative scientists very often criticize scientific PR, the optimization of papers for a higher Altmetric score, and the sensational announcements that help achieve higher Altmetric Attention Scores. But it is extremely important to understand that the kids growing up today live in a new world of technology that is optimized to grab their attention and compete for their eyeballs. So it is important to compete not with other scientists but with non-value-adding celebrities and noise, and to get that attention back into the real world of science. As I said at the beginning of this article, humans are not goldfish; we have the capacity to retain various types of information for different lengths of time. We should use this capacity to learn more about science and the minds behind recent scientific developments.

Altmetric has fascinated me since its inception. I think it is our duty as scientists to popularize the scientific achievements of our own teams and of others, and to inspire people to get into biomedicine. Tracking attention for scientific papers is not an easy task, and Nature did not disappoint. Other publishers have tried to launch similar systems, but they pale in comparison in quality, scale, and presentation. Hence, I was very pleased when Katherine Christian, CEO of Altmetric, agreed to answer a couple of my questions.

Alex: In your opinion, what can scientists do to increase the popularity of their articles?

Kathy Christian: We don't advocate increasing the popularity of research simply to increase the Altmetric Attention Score. When it comes to research attention, we encourage researchers to think about the aim of their research - what outcomes are they looking for - and to focus on the specific attention sources (e.g. policy, news, Twitter) that are going to be most helpful in achieving those outcomes. For example, a researcher working on new ways for governments to reduce carbon emissions is most likely going to focus on attention in policy documents, whereas a researcher working on effective management of diabetes may be more focused on Twitter and Facebook with the aim of reaching patients. One tip we suggest is to search for research in your field that has been successful in reaching the audiences that you're keen to reach and review their attention profile to help develop your communication strategy. For some more detailed tips we have a blog post on this topic, 'A quick and dirty guide to building your online reputation', and a short guide, '10 clever tips for promoting your research online'.

Alex: Are you planning to add any additional sources to Altmetric including TikTok and other social media? It looks like these channels are picking up in popularity.

Kathy Christian: We are continually evaluating where people are speaking about research in an effort to increase the diversity and coverage of our data; some attention sources are more challenging to track as they either do not link back to research or the platforms are closed. TikTok is a good example of an emerging source for research discussion that is also difficult to track. The technology required to detect mentions within a purely audiovisual space, such as TikTok, is still in its early stages, so collecting attention with a high enough level of confidence would be incredibly challenging. While YouTube is also an audiovisual space, in that case we're able to rely on the 'description' section and search for links back to research outputs.

Screenshot of Altmetric Top 100 - 2018-2020

To highlight the most popular research papers, every year Altmetric publishes the Top 100 lists. There is a blog post, press release, and a podcast accompanying the annual release. And if you want to analyze the trends by yourself, there is also an open ready-to-crunch dataset.

Unsurprisingly, the 2020 list was dominated by COVID-19 papers. But if you look deeper at the most popular papers, they are either papers written by celebrities or papers covering popular dinner-table topics such as diet, fake news, and climate change. The exciting biology and chemistry papers generally lag behind.

See more here:
Measuring Attention In Science And Technology - Forbes

The Discontents Of Artificial Intelligence In 2022 – Inventiva


Recent years have seen a boom in the use of Artificial Intelligence. This review essay is divided into two parts: Part I introduces contemporary AI, while Part II will be dedicated to its widespread, rapid adoption and the resulting crises.

In recent years, Artificial Intelligence, or AI, has flooded the world with applications outside the research laboratory. A number of AI-powered features are now standard, including face recognition, keyboard suggestions, Amazon recommendations, Twitter follower suggestions, image similarity search, and text translation. Artificial intelligence is also being applied in areas far removed from the ordinary user, such as radiological diagnostics, pharmaceutical drug development, and drone navigation. Artificial intelligence has therefore become the buzzword of the day and is seen as a portal to the future.

In 1956, John McCarthy and others conceptualized a summer research project aimed at replicating human intelligence in machines; this is widely regarded as the founding moment of the discipline of artificial intelligence. In the beginning, these pioneers worked under the premise that every aspect of learning or intelligence could be described so precisely that a machine could simulate it.

Although the objective was ambitious, for pragmatic reasons board games have often been used to test artificial intelligence methods: they have precise rules that can be encoded in a computational framework, and playing them with skill is considered a hallmark of intelligence.

A few years ago, a program called AlphaGo created a sensation by defeating the reigning Go champion. The program was developed by DeepMind, a Google company.

Garry Kasparov, then the world chess champion, was defeated by IBM's Deep Blue in a celebrated encounter between human and machine in 1997. Kasparov's defeat was unnerving because it breached a frontier in chess, traditionally thought of as a cerebral game. Even so, the notion that a machine could defeat the world champion at the board game of Go was long considered an unlikely dream: the number of possible move sequences in Go is vastly greater than in chess, and Go is played on a much larger board.

Nevertheless, in 2016 a computer program developed by DeepMind, a company owned by Google, made headlines by defeating the reigning world Go champion, Lee Sedol. As in 1997, commentators celebrated this victory as the beginning of a new era in which machines would eventually surpass humans in intelligence.

The reality was completely different. By any measure, AlphaGo was a sophisticated tool, but it could not be considered intelligent. While it was able to pick the best move at any time, the software did not understand the reasoning behind its choices.

A key lesson from AI is that machines can be endowed with abilities previously possessed only by humans without being intelligent in the way sentient beings are. Arithmetic computation is one non-AI example: for most of history, multiplying two large numbers was a difficult task.

Logarithm tables had to be painstakingly produced, at great human effort, to make such tasks manageable. For many decades now, even the most straightforward computer has performed such calculations efficiently and reliably. The same can now be said, thanks to AI, about virtually any human task involving routine operations.

With unprecedented advances in computing power and data availability, today's AI extends this beyond simple, routine tasks to more sophisticated ones. Millions of people already use AI tools, and AI is beginning to make inroads into domains such as science and engineering, where specialist knowledge is required.

One area of universal relevance is healthcare, where AI tools can be used to assess a person's health, provide a diagnosis based on clinical data, or analyze large-scale study data. In more esoteric fields, artificial intelligence has recently been applied to highly complex problems such as protein folding and fluid dynamics. Such advances are expected to have a multitude of practical applications in the real world.

History

Much early AI work centred on symbolic reasoning: laying out a set of propositions and logically deducing their implications. However, this enterprise soon ran into trouble, because enumerating all the operational rules in a specific problem context proved impossible.

The competing paradigm is connectionism, which aims to overcome the difficulty of describing rules explicitly by inferring them implicitly from data. An artificial neural network stores what it learns in the strengths (weights) of connections between units loosely modelled on neurons and their connectivity in the brain.

At various points, leading figures have claimed that a definitive solution to the problem of computational intelligence was imminent, based on the success of one paradigm or the other. In spite of progress, the challenges proved far more complex, and the hype was typically followed by a period of profound disillusionment and a significant reduction in funding for academics, a period referred to as an AI winter.

Against this backdrop, DeepMind's recent successes are presented as an endorsement of its approach, one that could help society find answers to some of the world's most pressing and fundamental scientific problems. Readers interested in the key concepts of AI, the background of the field, and its boom-bust cycles may enjoy two recently published popular expositions written by long-term researchers.

These are Melanie Mitchells Artificial Intelligence: A Guide for Thinking Humans (Pelican Books, 2019) and Michael Wooldridges The Road to Conscious Machines: The Story of Artificial Intelligence (Pelican Books, 2020).

Artificial Intelligence has been confronted with two issues of profound significance since its inception. While it is impressive to defeat a world champion at their own game, the real world is much messier than a game in which ironclad rules govern everything.

For this reason, successful AI methods developed to solve narrowly defined problems cannot be generalized to other situations involving diverse aspects of intelligence.

Although AlphaGo worked out the winning moves, its human representative had to place the stones on the board, a seemingly mundane task. Intelligence isn't defined by a single skill like winning games, because intelligence is a whole lot more than the sum of such parts. It encompasses, among other things, the ability to interact with one's environment, one of the essentials of embodied behaviour.

One of the most essential skills that a child develops effortlessly is that of using their hands to perform delicate tasks. Robotics has yet to develop this skill.

Moreover, the question of how to define intelligence itself looms larger and more significant than the question of how AI tools can overcome their technical limitations. Researchers often assume that approaches developed to tackle narrowly defined problems like winning at Go can be used to solve more general problems of intelligence. There has been scepticism towards this rather brash belief, both from within the community and from older disciplines like philosophy and psychology.

Whether intelligence can be substantially or entirely captured in a computational paradigm, or whether it is irreducible and ineffable, has been heavily debated. Hubert Dreyfus's well-known 1965 report, Alchemy and Artificial Intelligence, reveals the disdain and hostility some feel towards AI's claims; in turn, Dreyfus's views were called "a budget of fallacies" by a well-known AI researcher.

AI is also viewed with the unbridled optimism that it can transcend biological limitations and break all barriers, a notion known as the Singularity. The futurist Ray Kurzweil claims that machine intelligence will overwhelm human intelligence as the capabilities of AI systems grow exponentially. Kurzweil has attracted a fervent following despite his ridiculous argument about exponential growth in technology. The Singularity is best regarded as a kind of technological rapture without serious intellectual foundations.


Stuart Russell, first author of the most widely used textbook on artificial intelligence, is an AI researcher who does not shy away from defining intelligence: humans are intelligent to the extent that they can be expected to reach their objectives (Russell, Human Compatible, 9), and machine intelligence can be defined in the same way. Such an approach does help pin down the elusive notion of intelligence, but, as anyone who has read about utility in economics can attest, it shifts the burden onto an accurate description of our goals.

Russell's style differs significantly from the writing of Mitchell and Wooldridge: he is terse, expects his readers to stay engaged, and gives no quarter. Human Compatible is a highly thought-provoking book, though its narrative jumps from flowing argument to abstruse hypothesis.

A recent study found that none of the hundreds of AI tools developed for detecting Covid was effective.

Additionally, Human Compatible differs significantly from other AI expositions by examining the dangers of future AI surpassing human capabilities. While Russell avoids evoking dystopian Hollywood imagery, he does argue that AI agents might combine to cause harm and accidents in the future. He points to the story of Leo Szilard, who figured out the physics of nuclear chain reactions shortly after Ernest Rutherford had dismissed the idea of atomic power as moonshine, and warns against the belief that such an eventuality is highly unlikely or impossible.

Nuclear warfare later unleashed its horrors. Human Compatible focuses on guarding against the possibility of AI taking over the world, but Wooldridge is not convinced by the analogy: decades of AI research suggest that human-level AI, unlike a nuclear chain reaction, cannot be described as a simple mechanism (Wooldridge, The Road to Conscious Machines, 244).

Debates in philosophy about the nature of intelligence and the fate of humanity are enriching but ultimately undecidable. AI research has long run on two distinct tracks, cognitive science and engineering, and most researchers focus on specific problems, indifferent to the larger debates. Unfortunately, the objectives and claims of these two approaches are often conflated in public discourse, leading to much confusion.

Relatedly, terms like "neurons" and "learning" have precise mathematical meanings within the discipline but are immediately associated with their commonsense connotations, leading to severe misunderstandings about the entire enterprise. A neural network is not a model of the human brain, and "learning" here refers to a broad set of statistical principles and methods that amount to sophisticated curve fitting and decision-rule algorithms.

Deep learning, which took off roughly a decade ago, has almost completely replaced other methods of machine learning.

A few decades ago, neural networks that learn from data were considered ineffective. With the development of deep learning, they garnered renewed interest around 2012, leading to significant improvements in image and speech recognition. Today, celebrated AI systems such as AlphaGo and its successors, as well as widely used tools such as Google Translate, employ deep learning, in which the adjective signifies not profundity but the multiple layering of the network.

Deep learning has been sweeping many disciplines since it was introduced over a decade ago, and it has now nearly wholly replaced other methods of machine learning. Three of its pioneers received the Turing Award in 2018, the highest honour in computer science, anointing its paradigmatic dominance.

Success in AI is accompanied by hype and hubris. In 2016, Geoff Hinton, one of the Turing trio, stated that we should stop training radiologists, because it would become clear within five years that deep learning provides better outcomes than radiologists do. The failure to deliver us from flawed radiologists, among other problems with the method, did not stop Hinton from declaring in 2020 that deep learning will be able to do everything. Meanwhile, a recent study concluded that none of the hundreds of AI tools developed for detecting Covid was effective.

It is an iron law of AI that success is followed by hype and hubris.

Our understanding of the nature of contemporary learning-based AI tools is enhanced by looking at how they are developed. As an example, consider detecting chairs in images. A chair can have various components: legs, backrests, armrests, cushions, and so on. Chairs come in countless combinations of such elements, yet all of them are recognizable as chairs.

Other objects, such as bean bags, can defeat any rule we might formulate about what a chair should contain. Methods such as deep learning seek to overcome precisely this limitation of symbolic, rule-based deduction. Instead of trying to define rules that cover every variety, we collect a large number of images of chairs and other objects and feed them into a neural network along with the correct label (chair vs. non-chair).

During the training phase, a deep learning approach modifies the weights of the connections in the network to mimic the desired input-output relationship as closely as possible. If this is done correctly, the network will then be able to say whether previously unseen test images contain chairs.

A chair-recognizer of this kind needs many images of chairs of different shapes and sizes. Extending the idea, one may consider any number of categories, chairs, tables, trees, people and so on, all of which appear in the world in glorious but maddening variety. It is therefore essential to acquire adequately representative images of the objects.
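
Here is a minimal sketch of the training recipe described above, assuming a folder of labelled images; the directory layout, network choice, and hyperparameters are illustrative, not from the article.

```python
import torch
from torch import nn
from torchvision import datasets, transforms, models

# Assumed layout: data/train/chair/*.jpg and data/train/not_chair/*.jpg
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder("data/train", transform=transform)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# A small convolutional backbone with a 2-way head (chair vs. non-chair);
# in practice one would start from pretrained weights.
model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for epoch in range(3):                       # a few passes over the data
    for images, labels in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(images), labels)
        loss.backward()                      # adjust connection weights
        optimizer.step()
```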

It has been shown that deep learning methods can work extraordinarily well, but they are often unreliable and unpredictable.

A number of significant advances in automatic image recognition were made in 2012, thanks to the combination of relatively cheap, powerful hardware and the rapid expansion of the internet, which had enabled researchers to build a large dataset, known as ImageNet, containing millions of images labelled with thousands of categories.

Despite working well, deep learning methods can behave unreliably. For example, an American school bus can be mistaken for an ostrich because of tiny changes to an image that are invisible to the human eye. It is also recognized that incorrect results can arise from spurious and unreliable statistical correlations rather than from any deep understanding.
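
The school-bus-to-ostrich failure is the classic adversarial example. Below is a minimal sketch of how such a perturbation can be generated with the fast gradient sign method, a standard technique not named in the article; the model and image are assumed inputs.

```python
import torch
from torch import nn

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image`.

    `model` is any differentiable classifier, `image` a (1, C, H, W) tensor,
    `true_label` a (1,) tensor of the correct class index. `epsilon` bounds
    how far each pixel may move, small enough to be invisible to a human.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(image), true_label)
    loss.backward()
    # Nudge every pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```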

A boat in an image is often recognized correctly only because it is surrounded by water; the method has no model or conception of what a ship is. The limitations and problems of AI may have been largely academic concerns in the past, but things are different now that a number of AI tools have been taken from the laboratory and deployed in real life, often with grave consequences.

Due to a relentless push towards automation, a number of data-driven methods were developed and deployed locally, including in India, well before deep learning became a fad. Among the tools that have achieved extraordinary notoriety is COMPAS, used by US courts to inform sentencing based on the estimated risk of recidivism.

A tool such as this uses statistics from existing criminal records to estimate a defendant's chances of committing a crime in the future. A well-known investigation found that the tool produced racially biased outcomes against black defendants even though race is not an explicit input. When judges rely on such predictions in sentencing, the result is discrimination based on race.

For biometric identification and authentication, fingerprints and face images are even more valuable. Many law enforcement agencies and other state agencies have adopted face recognition tools due to their utility in surveillance and forensics. Affective computing and other dubious techniques for detecting emotion have also been used in a number of contexts, including employment decisions as well as more intrusive surveillance methods.

A number of careful studies have shown that many commercially available face recognition programs are profoundly flawed and discriminatory. A recent audit of commercial tools revealed that black women can experience face recognition error rates as high as 35%, far above those for white men, prompting growing calls to halt their use. In India and China especially, face and emotion recognition is becoming more widespread, with tremendous implications for human rights and welfare; this deserves a much more thorough discussion than can be presented here.

Relying on real-world data for decision making introduces various problems, many of which can be grouped under the heading of bias. Face recognition suffers from bias caused by the low number of people of colour in many of the datasets used to develop the tools. Another limitation is the limited relevance of the past for defining the contours of the society we want to build: an AI algorithm that relies on past records, as in US recidivism modelling, will disparately harm the poor, since they have historically experienced higher incarceration rates.

Similarly, if one were to automate hiring for a professional position in India, models based on past hiring would automatically produce caste bias, even if caste were not explicitly considered. Cathy O'Neil details a number of such incidents in the American context in her famous book, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (Penguin Books, 2016).


Artificial intelligence methods do not learn from the world directly but from a dataset that serves as its proxy. A lack of ethical oversight and poorly designed data collection have long plagued AI research in academia. Scholars from a range of disciplines have put a great deal of effort into examining bias in AI tools and datasets and its ramifications in society, particularly for the poor and the traditionally discriminated against.

Beyond bias, many modern AI tools are impossible to reason about or interpret. Since those affected by a decision often have a right to know the reasoning used to arrive at it, this problem of explainability has profound implications for transparency.

Within the broader computer science community, there has been interest in formalizing these problems, leading to dedicated academic conferences and an online textbook in preparation. An essential result of this exercise has been a theoretical understanding of the impossibility of fairness: multiple reasonable notions of fairness cannot all be satisfied simultaneously.
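
One standard way to see the tension (my illustration, not from the article) is an identity relating a binary risk classifier's error rates to its positive predictive value within a group whose base rate of the outcome is p:

```latex
% For a group with base rate p, false positive rate FPR, false negative
% rate FNR, and positive predictive value PPV, any binary classifier satisfies
\mathrm{FPR} \;=\; \frac{p}{1-p}\cdot\frac{1-\mathrm{PPV}}{\mathrm{PPV}}\,\bigl(1-\mathrm{FNR}\bigr).
```

If two groups have different base rates p, a classifier cannot have equal PPV and equal error rates (FPR and FNR) in both groups at the same time; this is the formal core of the disagreement over tools like COMPAS.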

Research and practice in AI should also consider the trade-offs involved in designing such software and the societal implications of those choices. The second part of this essay will show, however, that these considerations are seldom adequately addressed, and that the rapid expansion of contemporary AI technology from the research lab into daily life has unleashed a wide range of problems.


Read the original:
The Discontents Of Artificial Intelligence In 2022 - Inventiva

Is AI the Future of Sports? – Built In

He sees an opening on the left wing and immediately punishes them. After rushing down the side, he looks for his teammates in the center and quickly makes the cross in to finish it off!

Turn on any sports channel and you'll hear something similar. Chances are you pictured Ronaldo or another star player running down a fresh pitch. In fact, this could actually describe a play from an artificial intelligence bot in a recent international tournament. It's time to shift our thinking as AI becomes the star player.

As we already know, using AI to enhance human athlete performance is becoming a pervasive practice. The next step for AI in sports is introducing AI players. In fact, we currently have AI agents smart enough to mimic high-level human tactics. They have the potential to revolutionize the sports industry while pushing the envelope regarding what AI can really do.

The immediate response from many people is that such a world will never come to be: how could we enjoy watching machines? Many claim that playing against traditional AI can be a repetitive and boring experience. Others can't imagine any joy in beating machine opponents. To address this, let's start by examining why we like traditional sports and then outline how AI will come to meet these demands.


Sports fan psychologists have nailed down eight core reasons why people love their sports.

Many of the motivations mentioned above aren't unique to traditional sports. For example, getting together with friends and family to bond is about the people, not about the sport. As such, if the conditions are right, a similar variant involving AI could make inroads into the industry.


The adoption of AI into the world of sports will be slower than for other AI and software applications. Many of the motivations behind sports relate to how the people around an individual think and behave, so it's not enough to change a few minds; you need to change preconceptions across an entire industry to be truly effective. Here are four ways we're already seeing AI infiltrate sports, and how those applications appeal to our existing interest in sports:

Firstly, AI must be able to compete with humans for humans to get interested. We can already see AI's competitive edge in some of our most complex board games and e-sports. Here are some key cases:

These are all examples of deep learning AI, where strategies are not pre-programmed but learned. Deep learning systems consist of up to billions of individual parameters, layered together to create a complex network. Some goal is defined for the system, such as winning a simple two-player game, which the system can begin to optimize toward. This optimization happens through machine-based trial and error: the system plays millions of games against itself, each time learning what works and what doesn't and adjusting its parameters (a toy sketch of this self-play loop appears below). After all these games, the system will have (hopefully) learned to play at or above the level of its human counterparts, which is exactly what we've seen with the games mentioned above. This brings us to the wild world of e-sports.
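
As a toy illustration of that trial-and-error loop (far simpler than the deep networks used for Go or Dota 2, and entirely my own sketch), the snippet below has a single value table play the game of Nim against itself and gradually learn the winning strategy.

```python
import random
from collections import defaultdict

# Toy self-play learner for Nim: players alternately take 1-3 stones from a
# pile; whoever takes the last stone wins. The "parameters" here are just a
# table of action values shared by both players (self-play).
Q = defaultdict(float)          # Q[(stones_left, take)] = estimated value
ALPHA, EPSILON = 0.1, 0.2

def moves(stones):
    return [m for m in (1, 2, 3) if m <= stones]

def choose(stones):
    if random.random() < EPSILON:                       # trial and error
        return random.choice(moves(stones))
    return max(moves(stones), key=lambda m: Q[(stones, m)])

for episode in range(50_000):                           # play against itself
    stones = 15
    while stones > 0:
        move = choose(stones)
        remaining = stones - move
        if remaining == 0:
            target = 1.0                                # current player wins
        else:
            # The opponent moves next; their best outcome is our worst.
            target = -max(Q[(remaining, m)] for m in moves(remaining))
        Q[(stones, move)] += ALPHA * (target - Q[(stones, move)])
        stones = remaining

# The learned policy should avoid leaving the opponent a multiple of 4 stones.
print({s: max(moves(s), key=lambda m: Q[(s, m)]) for s in range(1, 16)})
```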


Our robotics capabilities are still somewhat limited, as seen in various robotic games such as soccer. It will still be some time before we can apply AI players to most traditional sports (though Boston Dynamics is getting there quickly). Instead, AI is likely to become most common in the world of e-sports.

E-sports is quickly becoming comparable (in terms of market size) to traditional sports. The industry eclipsed $1 billion in revenue in 2021 and has projected year-over-year growth of 15 percent. The largest team in e-sports, Cloud 9, had a valuation of over $300 million, which equates to roughly five percent of the world's largest sports franchise, the Dallas Cowboys, at $7 billion. In prize pools, e-sports already exceeds many traditional events, including the Masters golf tournament and the Confederations Cup, at over $40 million.

The key thing to note is that e-sports are still relatively new. As opposed to traditional sports, some of which have franchises over a century old that have been big businesses for over 30 years, e-sports only began about 25 years ago, and the most popular game, Dota 2, was released just 10 years ago. The size of the prize pools, set against the youth of e-sports, shows how quickly the industry has grown. Once this continued growth hits critical mass and breaks into the mainstream, e-sports may provide the same family and group-affiliation motivations that we see in traditional sports.

Consider that FIFA now runs an international e-sports tournament for its very own games. For fans at home, the experience is largely the same: watching the same match on the same television with the same live commentary. Granted, the animation of current games still has room for improvement, but it improves every year with new releases. The rapidly advancing animations, along with the fact that they're AI-generated, allow for far greater creativity. For example, you could watch in 3D and experience being in the play, or maybe even in the referee's shoes. The world's most lucrative sport (soccer) is already moving into e-sports, so it won't be long before others follow.

There are other reasons e-sports make a good first choice for those interested in AI games, such as the ability to train and improve AI more efficiently. For a computer game, an AI can play millions of training games (e.g., about 5 million games for AlphaGo), whereas in traditional sports an AI must physically play the game to learn strategy and test its performance (and even this limitation is something OpenAI is working on).


Right now, if someone asks you to watch two programs compete against each other inside another program, you might think they're a little weird. This is a reasonable reaction, but like it or not, AI competitions are becoming more and more mainstream.

There are various competitions between AIs that garner millions of viewers. Here's a list of various games and AI representations on YouTube that already have large audiences.

Overall, this is on the order of 100 million views on YouTube, which was only around two percent of one day of streaming (as of 2017). However, given the relatively small community, this number is significant. Coupling the growth of AI bots with the growth in e-sports will create massive expansion in the genre as a whole. However, this growth won't be sustainable unless the AI stays interesting.

Once watching AI compete becomes common, we'll need to find new ways to keep viewers involved. To achieve this, it's critical that we diversify our AI: people don't want to watch the same thing over and over again. As previously mentioned, one of the motivators for watching sport is entertainment, which comes from the chance factor of not knowing who will walk out victorious on any given day. The agents must therefore be capable of making varied, high-level, non-obvious plays (which we've already seen with Dota 2 and Go, to name a few).

In fact, there's a common misconception that watching AI is a boring experience because the machines unintelligently copy humans or follow pre-scripted rule sets. Certainly that was true of machines of the past, but for many years now we've had AI that can act in creative and altogether astonishing ways.

One of the most interesting things about Google's AlphaGo was its creativity: it played the game in ways that humans did not expect. Along the same lines, in the world of chess, when human players make moves that vary from standard procedure, referees start to suspect them of using artificial intelligence systems as assistants. Put another way, in the game of chess, creativity is no longer the mark of a human but that of a machine. It's the same in Go, and as time passes it will become true in other sports, too.

During AlphaStar's training, the DeepMind team observed that the bots adopted a variety of good strategies. One might expect the bots to follow a single strategy and simply get better at it over time; in fact, they could be clustered into groups, and each group had a different way of playing the game (an aggressive start, a focus on certain unit types, and so on). In a way, each bot had its own player personality. These personalities, with their varied play styles, will keep AI sports both interesting and entertaining for human viewers.


Once AI agents have become a regular part of our sporting experience, advancements in robotics will catch up, allowing them to play all of the games we usually play, not just for us, but with us. Soccer players will be able to practice against full teams of AI bots that are set to challenge them and help them grow. They'll also be able to compete in human-robot leagues.

While human biology is relatively fixed, robotics will continue to advance, which means that sports can continue to evolve too. Imagine a game of soccer played at double the pace, with a magnetic ball and speeds matching those of tennis. Sounds pretty exciting to me.

Finally, new games can be created that only AI can perfect. As previously mentioned, escape and aesthetics are two of the motivators for sports fans. Watching an AI-empowered machine conquer and handle complex games will create a feeling of escape we've never experienced before.


If the above story comes to be, there would naturally be significant impacts on sports and entertainment.

Sports organizations and related companies should start preparing for these changes before it's too late. For the rest of us, likely not much will change. We can't hope to imitate Cristiano Ronaldo's beautiful strikes or Federer's impossible serves, and I won't be able to match the feats of our robotic future athletes. If nothing else, it will be interesting to see how sports evolve in the wake of AI development. So for now, I'll sit back, pick a side and enjoy the game with my friends.

Read more:
Is AI the Future of Sports? - Built In

This is the reason Demis Hassabis started DeepMind – MIT Technology Review

Hassabis has been thinking about proteins on and off for 25 years. He was introduced to the problem when he was an undergraduate at the University of Cambridge in the 1990s. "A friend of mine there was obsessed with this problem," he says. "He would bring it up at any opportunity, in the bar, playing pool, telling me if we could just crack protein folding, it would be transformational for biology. His passion always stuck with me."

That friend was Tim Stevens, who is now a Cambridge researcher working on protein structures. "Proteins are the molecular machines that make life on earth work," Stevens says.

Nearly everything your body does, it does with proteins: they digest food, contract muscles, fire neurons, detect light, power immune responses, and much more. Understanding what individual proteins do is therefore crucial for understanding how bodies work, what happens when they dont, and how to fix them.

A protein is made up of a ribbon of amino acids, which chemical forces fold up into a knot of complex twists and twirls. The resulting 3D shape determines what it does. For example, hemoglobin, a protein that ferries oxygen around the body and gives blood its red color, is shaped like a little pouch, which lets it pick up oxygen molecules in the lungs. The structure of SARS-CoV-2's spike protein lets the virus hook onto your cells.


The catch is that it's hard to figure out a protein's structure, and thus its function, from the ribbon of amino acids. An unfolded ribbon can take some 10^300 possible forms, a number on the order of the number of possible moves in a game of Go.

Predicting this structure in a lab, using techniques such as x-ray crystallography, is painstaking work. Entire PhDs have been spent working out the folds of a single protein. The long-running CASP (Critical Assessment of Structure Prediction) competition was set up in 1994 to speed things up by pitting computerized prediction methods against each other every two years. But no technique ever came close to matching the accuracy of lab work. By 2016, progress had been flatlining for a decade.

Within months of its AlphaGo success in 2016, DeepMind hired a handful of biologists and set up a small interdisciplinary team to tackle protein folding. The first glimpse of what they were working on came in 2018, when DeepMind won CASP 13, outperforming other techniques by a significant margin. But beyond the world of biology, few paid much attention.

That changed when AlphaFold2 came out two years later. It won the CASP competition, marking the first time an AI had predicted protein structure with an accuracy matching that of models produced in an experimental lab, often with margins of error just the width of an atom. Biologists were stunned by just how good it was.

Watching AlphaGo play in Seoul, Hassabis says, he'd been reminded of an online game called FoldIt, which a team led by David Baker, a leading protein researcher at the University of Washington, released in 2008. FoldIt asked players to explore protein structures, represented as 3D images on their screens, by folding them up in different ways. With many people playing, the researchers behind the game hoped, some data about the probable shapes of certain proteins might emerge. It worked, and FoldIt players even contributed to a handful of new discoveries.

If we can mimic the pinnacle of intuition in Go, then why couldn't we map that across to proteins?

Hassabis played that game when he was a postdoc at MIT in his 20s. He was struck by the way basic human intuition could lead to real breakthroughs, whether making a move in Go or finding a new configuration in FoldIt.

"I was thinking about what we had actually done with AlphaGo," says Hassabis. "We'd mimicked the intuition of incredible Go masters. I thought, if we can mimic the pinnacle of intuition in Go, then why couldn't we map that across to proteins?"

The two problems weren't so different, in a way. Like Go, protein folding is a problem with such vast combinatorial complexity that brute-force computational methods are no match for it. Another thing Go and protein folding have in common is the availability of lots of data about how the problem could be solved. AlphaGo used an endless history of its own past games; AlphaFold used existing protein structures from the Protein Data Bank, an international database of solved structures that biologists have been adding to for decades.

AlphaFold2 uses attention networks, a standard deep-learning technique that lets an AI focus on specific parts of its input data. This tech underpins language models like GPT-3, where it directs the neural network to relevant words in a sentence. Similarly, AlphaFold2 is directed to relevant amino acids in a sequence, such as pairs that might sit together in a folded structure. "They wiped the floor with the CASP competition by bringing together all these things biologists have been pushing toward for decades and then just acing the AI," says Stevens.
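
For readers unfamiliar with the mechanism, here is a minimal NumPy sketch of scaled dot-product attention, the generic building block the article refers to; it is an illustration, not AlphaFold2's actual architecture.

```python
import numpy as np

def scaled_dot_product_attention(queries, keys, values):
    """Each query attends to all keys; outputs are weighted sums of values.

    queries: (n_q, d), keys: (n_k, d), values: (n_k, d_v). In a protein model
    the rows might correspond to residues in the amino-acid sequence.
    """
    d = queries.shape[-1]
    scores = queries @ keys.T / np.sqrt(d)          # similarity of each pair
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ values                          # focus on relevant inputs

# Toy usage: 4 "residues", each represented by an 8-dimensional embedding.
x = np.random.randn(4, 8)
print(scaled_dot_product_attention(x, x, x).shape)   # (4, 8)
```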

Over the past year, AlphaFold2 has started having an impact. DeepMind has published a detailed description of how the system works and released the source code. It has also set up a public database with the European Bioinformatics Institute that it is filling with new protein structures as the AI predicts them. The database currently has around 800,000 entries, and DeepMind says it will add more than 100 million, nearly every protein known to science, in the next year.

"A lot of researchers still don't fully grasp what DeepMind has done," says Charlotte Deane, chief scientist at Exscientia, an AI drug discovery company based in the UK, and head of the protein informatics lab at the University of Oxford. Deane was also one of the reviewers of the paper that DeepMind published on AlphaFold in the scientific journal Nature last year. "It's changed the questions you can ask," she says.

View original post here:
This is the reason Demis Hassabis started DeepMind - MIT Technology Review