Archive for the ‘Alphago’ Category

How to Strengthen America’s Artificial Intelligence Innovation – The National Interest

Rapidly developing artificial intelligence (AI) technology is becoming increasingly critical for innovation and economic growth. To secure American leadership and competitiveness in this emerging field, policymakers should create an innovation-friendly environment for AI research. To do so, federal authorities should identify ways to engage the private sector and research institutions.

The National AI Research and Development (R&D) Strategic Plan, which will soon be updated by the Office of Science and Technology Policy (OSTP) and the National Science and Technology Council (NSTC), presents such an opportunity. However, the AI Strategic Plan needs several updates to allow the private sector and academic institutions to become more involved in developing AI technologies.

First, the OSTP should propose the creation of a federal AI regulatory sandbox to allow companies and research institutions to test innovative AI systems for a limited time. An AI sandbox would not only benefit consumers and participating companies; it would also enable regulators to gain first-hand insights into emerging AI systems and help craft market-friendly regulatory frameworks and technical standards. Regulators could also create sandbox programs to target innovation on specific issues, such as human-machine interaction and probabilistic reasoning, that the AI Strategic Plan identifies as priority areas in need of further research.

Second, the updated AI strategy should outline concrete steps to publish high-quality data sets using the vast amount of non-sensitive and non-personally identifiable data that the federal government possesses. AI developers need high-quality data sets on which AI systems can be trained, but the lack of access to these data sets remains a significant challenge for developing novel AI technologies, especially for startups and businesses without the resources of big tech companies. The costs associated with creating, cleaning, and preparing such data sets are too high for many businesses and academic institutions. For example, AlphaGo, a program developed by Google subsidiary DeepMind, made headlines in March 2016 when it defeated the human champion of the ancient Chinese strategy game Go. More than $25 million was spent on hardware alone to train the program.

Recognizing this challenge, the AI Strategic Plan recommended the development of shared public data sets, but progress in this area appears to be slow. Under the 1974 Privacy Act, the U.S. government has not created a central data repository, an important safeguard given the privacy and cybersecurity risks that such a repository of sensitive information would pose. However, different U.S. agencies have created a wide range of non-personally identifiable and non-sensitive data sets intended for public use. Two notable examples are the National Oceanic and Atmospheric Administration's climate data and NASA's non-confidential space-related data. Making such data readily available to the public can promote AI innovation in weather forecasting, transportation, astronomy, and other underexplored subjects.

Therefore, the AI strategy should propose a framework that enables the OSTP and the NSTC to work with government agencies in order to ensure that non-sensitive and non-personally identifiable data, intended for public use, are made available in a format suitable for AI research by the private sector and research institutions. To that end, the OSTP and the NSTC could use the federal government's existing FedRAMP classification of different data types to decide which data should be included in such data sets.

Finally, the AI Strategic Plan would benefit from a closer examination of other countries' AI R&D strategies. While policymakers should exercise caution in making international comparisons, awareness of these broader trends can help the United States capitalize on other countries' successes and avoid their regulatory mistakes. For example, the British and French governments recently spearheaded initiatives to promote high-level interdisciplinary AI research. Likewise, the Chinese government has launched similar initiatives to encourage cross-disciplinary academic research at the intersection of artificial intelligence, economics, psychology, and other fields. Studying and evaluating other countries' approaches could give American policymakers insights into which existing R&D resources should be devoted to interdisciplinary AI projects.

To maximize the benefit of this comparative approach, the AI Strategic Plan should propose mechanisms to conduct annual reviews of the global AI research and regulatory landscape, and evaluations of its successes and failures.

Ultimately, due to AI's general-purpose nature and its diffusion across the economy, the AI Strategic Plan should focus on enabling a wide range of actors, from startups to academic and financial institutions, to play a role in strengthening American AI innovation. An innovation-friendly research environment and an adaptable, light-touch regulatory approach are vital to secure America's global economic competitiveness and technological innovation in artificial intelligence.

Ryan Nabil is a Research Fellow at the Competitive Enterprise Institute in Washington, DC.

Image: Flickr/U.S. Air Force.


Why it’s time to address the ethical dilemmas of artificial intelligence – Economic Times

The Future of Life Institute (FLI) was founded in March 2014 by eminent futurologists and researchers to reduce catastrophic and existential risks to humankind from advanced technologies like artificial intelligence (AI). Elon Musk, who is on FLI's advisory board, donated $10 million to jump-start research on AI safety because, in his words, 'with artificial intelligence, we are summoning the devil'. For something that everyone is singing hosannas to these days, and treating as a solution to almost all challenges faced by industry or healthcare or education, why this cautionary tale?

AI's perceived risk isn't only from autonomous weapon systems - which countries like the US, China, Israel and Turkey produce - that can track and target humans and assets without human intervention. It's equally about the deployment of AI and such technologies for mass surveillance, adverse health interventions, contentious arrests and the infringement of fundamental rights. Not to mention the vulnerabilities that dominant governments and businesses can insidiously create.

AI came into global focus in 1997 when IBM's Deep Blue beat world chess champion Garry Kasparov. We came to accept that the outcome was inevitable, considering chess is a game based on logic, and that the ability of a computer to reference past games, evaluate options and select the most effective move instantly is superior to anything humans could ever do. When Google DeepMind's AlphaGo program bested the world's best Go player Lee Sedol in 2016, we learnt that AI could easily master games based on intuition too.

AI, AI, Sir

As the United Nations Educational, Scientific and Cultural Organisation (Unesco) has sharpened its focus on the ethical dilemmas that AI could create, it has embarked on developing a legal, global document on the subject. Situations discussed include how a search engine can become an echo chamber upholding real-life biases and prejudices - like when we search for the 'greatest leaders of all time' and get a list of only male personalities. Or the quandary when a car brakes to avoid a jaywalker and shifts the risk from the pedestrian to the travellers in the car. Or when AI is exploited to study 346 Rembrandt paintings pixel by pixel, leveraging deep-learning algorithms to produce a magnificent, 3D-printed masterpiece that could deceive the best art experts and connoisseurs.

Then there is the AI-aided application of justice in legislation, administration, adjudication and arbitration. Unesco's quest to provide an ethical framework to ensure emerging technologies benefit humanity at large is, indeed, a noble one.

Interestingly, computer scientists at the Vienna University of Technology (TU Wien), Austria, are studying Indian Vedic texts and applying them to mathematical logic. The idea is to develop reasoning tools to address deontic - relating to duty and obligation - concepts like prohibitions and commitments, to implement ethics in AI.

Logicians at the Institute of Logic and Computation at TU Wien and the Austrian Academy of Sciences are also studying the Mimamsa, which interprets the Vedas and suggests how to maintain harmony in the world, to resolve many of its innate contradictions. Essentially, as classical logic is less useful when dealing with ethics, a deontic logic needs to be developed that can be expressed in mathematical formulae, creating a framework that computers can comprehend and respond to.

Isaac Asimov's iconic 1950 book, I, Robot, sets out the three rules all robots must be programmed with: the Three Laws of Robotics - 1. To never harm a human or allow a human to come to harm. 2. To obey humans unless this violates the first law. 3. To protect its own existence unless this violates the first or second laws. In the 2004 film adaptation, a larger threat is envisaged - when AI-enabled robots rebel and try to enslave and control all humans, to protect humanity for its own good, by their dialectic.

Artificially Real

In the real world, there is little doubt that AI has to be mobilised for the greater good, guided by the right human intention, so that it can be leveraged to control larger forces of nature like climate change and natural disasters that we can't otherwise manage. AI must be a means to nourish humanity in multifarious ways, rather than unobtrusively aid its destruction. It is obvious that the Three Laws of Robotics must be augmented, so that expanded algorithms help the AI engine respect privacy, and not discriminate in terms of race, gender, age, colour, wealth, religion, power or politics.

We're seeing the mainstreaming of AI in an age of exponential digital transformation. How we figure its future will shape the next stage of human evolution. The time is opportune for governments to confabulate - to shape equitable outcomes, a risk management strategy and pre-emptive contingency plans.


About – Deepmind

The DeepMind Academic Fellowship Program provides an opportunity for early-career researchers in the fields of Computer Science and Artificial Intelligence to pursue postdoctoral study and build the experiences and research profile that will enable them to progress to full academic or other research leadership roles in future.

Alongside financial support, DeepMind provides opportunities for fellows to be mentored by senior DeepMind researchers. DeepMind will not direct their research and fellows are free to pursue any research direction they wish.

Fellowships are open to early-career researchers who have completed a PhD in Machine Learning, Computer Science, Statistics or another relevant field by the time they start their postdoc. We particularly encourage candidates who identify as Black to apply because this group is currently underrepresented in AI research.

DeepMind has partnered with the University of Cambridge, University College London and Queen Mary University of London to launch the Fellowship program in 2021. Applications for the second cohort of the program are expected to open later in 2022.


Experts believe a neuro-symbolic approach to be the next big thing in AI. Does it live up to the claims? – Analytics India Magazine

In their 2009 manifesto, Neural-Symbolic Cognitive Reasoning, Artur Garcez and Luis Lamb discussed the idea, popular in the 1990s, of integrating neural networks and symbolic knowledge. They cited Towell and Shavlik's KBANN (Knowledge-Based Artificial Neural Network), a system that inserts rules into, refines, and extracts rules from a neural network, and a model empirically shown to be effective. Industry leaders, including teams at IBM, Intel, Google, Facebook, and Microsoft, and researchers like Josh Tenenbaum, Anima Anandkumar, and Yejin Choi, are starting to apply this technique in 2022. Recent AI developments, challenges, and stagnation are leading the industry to consider this hybrid approach to AI modelling.

Neuro-symbolic AI is essentially a hybrid AI that leverages deep-learning neural network architectures and combines them with symbolic reasoning techniques. For example, we have been using neural networks to identify the shape or colour of a particular object. Applying symbolic reasoning can take this a step further, telling us more interesting properties of the object, such as its area, volume, and so on.
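As a rough illustration of this division of labour, consider the minimal sketch below: a stubbed-out "neural" perception step (standing in for a trained network) emits symbols describing an object, and explicit symbolic rules then derive a property - the area - that the network itself never learned. All function names and the returned symbols here are invented for illustration only.

```python
import math

# Hypothetical neural component: in practice a trained network would
# map an image to symbols such as ("circle", radius). Stubbed out here
# so the sketch stays self-contained.
def neural_perceive(image):
    return {"shape": "circle", "colour": "red", "radius": 3.0}

# Symbolic layer: explicit rules that reason over the symbols the
# network produced, deriving properties it was never trained on.
RULES = {
    "circle": lambda o: math.pi * o["radius"] ** 2,
    "square": lambda o: o["side"] ** 2,
}

def describe(image):
    obj = neural_perceive(image)
    obj["area"] = round(RULES[obj["shape"]](obj), 2)
    return obj

print(describe(None))  # includes 'area': 28.27 for a radius-3 circle
```

The point of the split is that new rules (volume, perimeter, "same vs. different") can be added to the symbolic layer without retraining the perception network.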

AI has been the talk of the town for more than a decade, and while it has stood true to several promises, the majority of the claims are still to be met, and the challenges connected to AI have only been increasing. In the past year, GPT-3 has told a user to commit suicide, Alexa has challenged a ten-year-old to touch a coin to a plug, and Facebook's algorithm has identified a man of colour as a primate. This is no different from Microsoft's Tay asserting Hitler was right, or Uber's self-driving cars running red lights a few years ago. Every GPT-3-style advance has come with a corresponding failure. The present efforts to ensure explainable, fair, ethical and efficient AI need to be supported by changes in how we approach artificial intelligence.

Scientist, AI author and entrepreneur Gary Marcus recently wrote about deep learning hitting a wall and about what a responsible AGI would require: 'It must be like stainless steel, stronger and more reliable and, for that matter, easier to work with than any of its constituent parts. No single AI approach will ever be enough on its own; we must master the art of putting diverse approaches together if we are to have any hope at all.'

Since AI uses learning and reasoning in its quest to be like humans, the neuro-symbolic approach allows us to combine these strengths: making inferences with existing neural networks while learning through symbolic representations. Knowable Magazine described this hybrid as showing duckling-like abilities. Ducklings can imprint on colours and shapes and differentiate between them. Moreover, they can distinguish 'same' from 'different', an aspect AI still struggles with. Here, the symbolic component holds symbols for physical objects and colours in its knowledge base, along with general rules for differentiating between them. This, combined with the deep nets, allows the model to be more efficient.

This combination requires humans to supply a knowledge base of symbolic rules for the AI to leverage, while the automated deep nets find the correct answers. 'The hybrid uses deep nets, instead of humans, to generate only those portions of the knowledge base that it needs to answer a given question,' notes Knowable Magazine.

DeepMind has seen some of its best success with its board-game-playing AI models: AlphaGo, AlphaZero, MuZero and more. These are hybrid models using symbolic AI. For instance, AlphaGo combined symbolic tree search with deep learning, while AlphaFold2 combines symbolic ways of representing the 3D physical structure of molecules with the data-trawling capabilities of deep learning. DeepMind asserted the qualities of symbolic learning in AI in a recent blog post: this approach 'will allow for AI to interpret something as symbolic on its own rather than simply manipulate things that are only symbols to human onlookers', they said. This allows for AI with human-like fluency. IBM has likewise asserted that neuro-symbolic AI is about getting AI to reason, introducing the Logical Neural Network (LNN) technique, built on the foundations of deep learning and symbolic AI. Given the combination, the software can successfully answer complex questions with minimal domain-specific training.
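The AlphaGo-style pairing mentioned above can be sketched, very loosely, as a symbolic tree search that delegates leaf evaluation to a learned value function. The sketch below stubs the "value network" with a seeded random function and uses a toy nine-move game; everything here is invented for illustration and is not DeepMind's actual method or code.

```python
import random

# Stub "value network": in AlphaGo this is a deep net trained on
# positions; a fixed, seeded random function stands in for it here.
random.seed(0)
def value_net(state):
    return random.uniform(-1.0, 1.0)

def legal_moves(state):
    # Toy game: nine slots, each playable once.
    return [m for m in range(9) if m not in state]

# Symbolic component: an explicit depth-limited negamax tree search
# that delegates leaf evaluation to the learned function.
def search(state, depth):
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return value_net(state), None
    best_value, best_move = float("-inf"), None
    for m in moves:
        v, _ = search(state + [m], depth - 1)
        v = -v  # opponent's value, seen from our perspective
        if v > best_value:
            best_value, best_move = v, m
    return best_value, best_move

value, move = search([], depth=2)
print(move in range(9))  # True
```

The search itself is pure symbol manipulation over explicit game states; the learning lives entirely inside the evaluation function, which is the essence of the hybrid.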

Two major conferences have been held asserting this need: the 2019 Montreal AI Debate between Yoshua Bengio and Gary Marcus, and the AAAI-2020 fireside conversation with Nobel Laureate Daniel Kahneman, Geoffrey Hinton, Yoshua Bengio and Yann LeCun. The key takeaway from these events was the need for AI to have a reasoning layer on top of deep learning to frame a rich future for AI.


Measuring Attention In Science And Technology – Forbes

Soccer fans cheering

Most people alive today who are not in science or medicine cannot quickly recall even ten biologists or ten chemists and their discoveries. Most of them can have a happy life without that knowledge. But many will easily recall ten popular sportspeople, singers, actors, and even the members of the Kardashian family. Some will recall celebrity scientists from the past, including Einstein, Newton, and Tesla. Celebrity scientists nowadays are very rare. In this long article, I would like to explore some of the reasons why there is a lack of celebrity scientists, why we need more of them, and how to track the popularity of academic papers.

circa 1955: Mathematical physicist Albert Einstein (1879 - 1955) delivers one of his recorded lectures. (Photo by Keystone/Getty Images)

Have you ever heard someone being told that they have a memory like a goldfish? Ever wondered what this expression even means? It means that they tend to forget things rather quickly or that they have a short memory span. But the truth is that this expression is a myth. Goldfish do not have a 3-second memory span, as we often hear. At least not according to this article by Live Science, which claims that some goldfish have a memory span of 2 seconds and others of 10 seconds - but it's always short.

But humans are not goldfish. We have the capacity to retain various types of information for different lengths of time. Short-term memories last seconds to hours, while long-term memories last for many years. According to this article by National Geographic, we also have a working memory that allows us to keep something in our minds for a limited time by repeating it. Whenever you read an academic article over and over again to memorize it, you're using your working memory. Another way to categorize memories is by the subject of the memory itself. You tend to remember things you most deeply and passionately care about. Everyone cares about different things, and so they retain information differently. I believe the most important type of information we must retain is scientific knowledge and discoveries. It is sometimes frustrating to see people forget some of the most famous modern-day scientists in the world while choosing to remember certain celebrities who are famous for being famous.

NEW YORK, NY - FEBRUARY 11: (L-R) Khloe Kardashian, Lamar Odom, Kris Jenner, Kendall Jenner, Kourtney Kardashian, Kanye West, Kim Kardashian, Caitlin Jenner and Kylie Jenner attend Kanye West Yeezy Season 3 on February 11, 2016 in New York City. (Photo by Jamie McCarthy/Getty Images for Yeezy Season 3)

For example, you may constantly hear about the Kardashians on the news or on social media, but when was the last time you heard about Rosalind Franklin? Or John Ray? Or John Logie Baird? Rosalind Franklin's work was instrumental in our understanding of the molecular structure of DNA; John Ray published important works on botany, zoology, and natural theology; and John Logie Baird demonstrated the world's first live working television system. The work of the former two is crucial to our understanding of humans, whereas the invention of the latter is something we rely on to spread information. The truth is that while there's nothing wrong with seeing celebrities dominate the news cycle in today's day and age, we must also not forget those who have contributed so much to science.

CEO of Tesla Motors and SpaceX, Elon Musk speaks to a crowd of people who had gathered to participate in Tesla Motors test drives outside the Texas Capitol building in Austin on January 15th, 2015. Tesla manufactures and sells electric cars. (Photo by Robert Daemmrich Photography Inc/Corbis via Getty Images)

It was not until Elon Musk entered the game and started promoting hardcore science in the consumer space that we got the first celebrity scientist-entrepreneur whom smart kids put on their walls as a role model. Still, very few people know about Elon's early successes like Zip2 and X.com. Even today, most people know about Elon because of SpaceX and Tesla. Neil deGrasse Tyson, a highly accomplished astrophysicist, gained more attention after he popularized science with books such as The Pluto Files and through his frequent appearances on television. Bill Nye had to host the science television show Bill Nye the Science Guy and the Netflix show Bill Nye Saves the World to get famous. Few people know about his career as a mechanical engineer at Boeing. Fewer know about his many patents.

Lee Se-Dol (C), one of the greatest modern players of the ancient board game Go, arrives before the third game of the Google DeepMind Challenge Match against Google-developed supercomputer AlphaGo at a hotel in Seoul on March 12, 2016. / AFP / JUNG Yeon-Je (Photo credit should read JUNG YEON-JE/AFP via Getty Images)

When it comes to companies, DeepMind, a subsidiary of Alphabet that builds advanced AI, managed to achieve greatness and fame by combining science with carefully crafted PR. In 2017, DeepMind released a documentary called AlphaGo: The Movie, which helped inspire a generation of scientists as well as the general public. The documentary is about a five-game Go match between world champion Lee Sedol and AlphaGo, a computer program that combines an advanced tree search with deep neural networks. These neural networks take a description of the Go board as input and process it through a number of different network layers containing millions of neuron-like connections (spoiler alert: the AI wins). Not only did the documentary receive multiple awards and nominations, but it also inspired people to work on AI.

Altmetric page of an academic paper

Very often, we see truly amazing scientific achievements published in peer-reviewed journals, conferences, or on pre-print servers. These are largely ignored by the mainstream public. To help measure attention beyond academia, Nature Publishing Group adopted a score called Altmetric.

According to its website, altmetrics are 'metrics and qualitative data that are complementary to traditional, citation-based metrics'. They can include peer reviews on Faculty of 1000, citations on Wikipedia and in public policy documents, discussions on research blogs, mainstream media coverage, bookmarks on reference managers like Mendeley, and mentions on social networks such as Twitter. Sourced from the internet, Altmetric data can tell us how often journal articles and other scholarly outputs like datasets are discussed and used around the world. Altmetric has been incorporated into researchers' websites, institutional repositories, journal websites, and more.

Altmetric - the measure of attention of academic papers

In order to track online attention for a specific piece of research, Altmetric requires the following three things: an output (journal article, dataset, etc.), an identifier attached to the output (DOI, RePEc, etc.), and mentions in a source that it tracks.

Once it tracks a mention of the research, Altmetric collates it with any other online attention it has seen for that item. It then displays it via the Altmetric details page, along with its unique donut and automatically calculated Altmetric Attention Score. Altmetric also includes the technological capacity to track items according to their Uniform Resource Identifier. So far, Altmetric has worked with institutions, funders, think-tanks and publishers to track attention to their press releases, grey literature and company reports.
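The output-identifier-mention pipeline described above can be sketched in a few lines: mentions carrying an identifier (a DOI here) are collated per output and rolled up into a single score. The source weights below are invented for this sketch; Altmetric's actual weighting is its own and is not reproduced here.

```python
from collections import defaultdict

# Illustrative source weights -- invented for this sketch, not
# Altmetric's real weighting scheme.
WEIGHTS = {"news": 8.0, "blog": 5.0, "policy": 3.0, "twitter": 0.25}

def attention_scores(mentions):
    """Collate tracked mentions by output identifier (e.g. DOI)
    and compute a simple weighted attention score per output."""
    scores = defaultdict(float)
    for mention in mentions:
        scores[mention["doi"]] += WEIGHTS.get(mention["source"], 0.0)
    return dict(scores)

mentions = [
    {"doi": "10.1000/xyz", "source": "news"},
    {"doi": "10.1000/xyz", "source": "twitter"},
    {"doi": "10.1000/abc", "source": "policy"},
]
print(attention_scores(mentions))
# {'10.1000/xyz': 8.25, '10.1000/abc': 3.0}
```

The persistent identifier is what makes the collation possible at all, which is why outputs without a DOI or equivalent are hard to track.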

The Nature Index is another great database of author affiliations and institutional relationships. It tracks roughly 60,000 research articles per year from 82 high-quality natural science journals and provides absolute and fractional counts of article publication at the institutional and national level. The Nature Index was conceived by Nature Research, the publisher of some of the world's leading multidisciplinary science journals. In total, more than 10,000 institutions are listed in the Nature Index.

Here's how the Nature Index works.

According to its website, the Nature Index uses article count (called Count) and fractional count (called Share) to track research output. A country/region or an institution is given a Count of 1 for each article that has at least one author from that country/region or institution. This is the case regardless of the number of authors an article has, and it means that the same article can contribute to the Count of multiple countries/regions or institutions.

To look at a country's, a region's or an institution's contribution to an article, and to ensure they are not counted more than once, the Nature Index uses a fractional count, referred to as Share, which takes into account the share of authorship on each article. The total Share available per article is 1, which is distributed among all authors under the assumption that each contributed equally. For instance, an article with 10 authors means that each author receives a Share of 0.1. For authors who are affiliated with more than one institution, the author's Share is then split equally between each institution. The total Share for an institution is calculated by summing the Share for individual affiliated authors. The process is similar for countries/regions.
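The Count and Share rules above are easy to verify with a small sketch. Institution names and the article layout below are hypothetical, chosen to mirror the ten-author example in the text.

```python
def nature_index(articles):
    """Compute Count and Share per institution. Count: 1 per article
    with at least one affiliated author. Share: each article's total
    Share of 1 is split equally among its authors, and a
    multi-affiliated author's portion is split equally between
    their institutions."""
    count, share = {}, {}
    for authors in articles:  # article = list of (author, [institutions])
        seen = set()
        for _, institutions in authors:
            per_author = 1.0 / len(authors)
            for inst in institutions:
                share[inst] = share.get(inst, 0.0) + per_author / len(institutions)
                seen.add(inst)
        for inst in seen:
            count[inst] = count.get(inst, 0) + 1
    return count, share

# One article with 10 authors: nine affiliated only with institution
# "A", one affiliated with both "A" and "B" (names hypothetical).
article = [(f"author{i}", ["A"]) for i in range(9)] + [("author10", ["A", "B"])]
count, share = nature_index([article])
print(count["A"], count["B"])                       # 1 1
print(round(share["A"], 2), round(share["B"], 2))   # 0.95 0.05
```

Note how both institutions get a full Count of 1 from the same article, while Share still sums to exactly 1 across them, which is the double-counting problem Share is designed to avoid.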

Other publishers have tried to copy this model, but Altmetric remains the de facto standard, and some of its concepts have been integrated into many other places. For example, at Insilico Medicine, we track the industry's attention to protein targets in PandaOmics using a tool called TargetMetric, which also presents a flower composed of the various sources of attention. It looks like Altmetric inspired many companies to represent attention in this flower format.

More conservative scientists often criticize scientific PR, the optimization of papers for higher Altmetric scores, and the often sensational nature of the announcements that help achieve higher Altmetric Attention Scores. But it is extremely important to understand that the kids growing up today live in a new world of technology that is optimized to grab their attention and compete for their eyeballs. So it is important to compete not with other scientists but with non-value-adding celebrities and noise, and to get attention back into the real world of science. As I said at the beginning of this article, humans are not goldfish; we have the capacity to retain various types of information for different lengths of time. We should use this capacity to learn more about science and the minds behind recent scientific developments.

Altmetric has fascinated me since its inception. I think it is our duty as scientists to popularize the scientific achievements of our own teams and the achievements of others, and to inspire people to get into biomedicine. Tracking attention for scientific papers is not an easy task, and Nature did not disappoint. Other publishers have tried to launch similar systems, but they pale in comparison in terms of quality, scale, and presentation. Hence, I was very pleased when Kathy Christian, CEO of Altmetric, agreed to answer a couple of my questions.

Alex: In your opinion, what can scientists do to increase the popularity of their articles?

Kathy Christian: We don't advocate increasing the popularity of research simply to increase the Altmetric Attention Score. When it comes to research attention, we encourage researchers to think about the aim of their research - what outcomes are they looking for - and to focus on the specific attention sources (e.g. policy, news, Twitter) that are going to be most helpful in achieving those outcomes. For example, a researcher working on new ways for governments to reduce carbon emissions is most likely going to focus on attention in policy documents, whereas a researcher working on effective management of diabetes may be more focused on Twitter and Facebook with the aim of reaching patients. One tip we suggest is to search for research in your field that has been successful in reaching the audiences you're keen to reach, and review its attention profile to help develop your communication strategy. For some more detailed tips, we have a blog post on this topic, 'A quick and dirty guide to building your online reputation', and a short guide, '10 clever tips for promoting your research online'.

Alex: Are you planning to add any additional sources to Altmetric including TikTok and other social media? It looks like these channels are picking up in popularity.

Kathy Christian: We are continually evaluating where people are speaking about research in an effort to increase the diversity and coverage of our data; some attention sources are more challenging to track as they either do not link back to research or the platforms are closed. TikTok is a good example of an emerging source for research discussion that is also difficult to track. The technology required to detect mentions within a purely audiovisual space, such as TikTok, is still in its early stages, so collecting attention with a high enough level of confidence would be incredibly challenging. While YouTube is also an audiovisual space, in that case we're able to rely on the 'description' section and search for links back to research outputs.

Screenshot of Altmetric Top 100 - 2018-2020

To highlight the most popular research papers, every year Altmetric publishes the Top 100 lists. There is a blog post, press release, and a podcast accompanying the annual release. And if you want to analyze the trends by yourself, there is also an open ready-to-crunch dataset.

Unsurprisingly, the 2020 list was dominated by COVID-19 papers. But if you look deeper at the most popular papers, they are either papers written by celebrities or papers covering popular dinner-table topics such as diet, fake news, and climate change. The exciting biology and chemistry papers generally lag behind.
