AI 101: All the Ways AI Could Improve or End Our World – Interesting Engineering
"We are as gods and might as well get good at it." - Stewart Brand, 1968
In December 2017, AlphaZero, a chess-playing artificial intelligence (AI) developed by Google's DeepMind, defeated Stockfish 8, the reigning world computer chess champion at the time. AlphaZero evaluates around 80,000 positions per second, according to The Guardian. Stockfish? 70 million.
Yet, out of 100 matches, AlphaZero won 28 and tied 72.
Stockfish's open-source algorithm has been continually tweaked by human input over the years. The New Yorker reports that coders suggest an idea to update the algorithm, and the two versions are then pitted against each other for thousands of matches to see which comes out on top.
Google claims that AlphaZero's machine learning algorithm had no human input beyond the programming of the basic rules of chess. This is a form of deep reinforcement learning, in which programs learn to carry out complex tasks without human intervention or oversight. After being taught the basics of the game, AlphaZero was set free to teach itself how to get better by playing against itself.
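To make that self-teaching loop concrete, here is a minimal, toy sketch in Python. It is emphatically not AlphaZero's actual method, which pairs deep neural networks with Monte Carlo tree search; this tabular version only illustrates the same basic idea of being given nothing but the rules and improving through self-play. The game (a one-pile variant of Nim), the hyperparameters, and every name in the code are assumptions chosen for brevity.

```python
import random
from collections import defaultdict

# Toy self-play learner for one-pile Nim (take 1 or 2 stones; whoever takes
# the last stone wins). Only the rules are encoded -- no strategy. The
# program improves purely by playing against itself, AlphaZero-style in
# spirit, though vastly simpler in practice.

ACTIONS = (1, 2)
value = defaultdict(float)   # learned value of each pile size for the player to move
EPSILON, ALPHA = 0.1, 0.5    # exploration rate and learning rate (assumed values)

def choose(pile):
    """Pick a legal move, mostly greedily with respect to current estimates."""
    legal = [a for a in ACTIONS if a <= pile]
    if random.random() < EPSILON:
        return random.choice(legal)
    # Leaving the opponent in a low-value position is good for us.
    return min(legal, key=lambda a: value[pile - a])

def self_play(pile=10):
    """Play one game against ourselves and update the value estimates."""
    states = []                      # pile sizes seen by the player to move
    while pile > 0:
        states.append(pile)
        pile -= choose(pile)
    reward = 1.0                     # the player who just took the last stone won
    for s in reversed(states):
        value[s] += ALPHA * (reward - value[s])
        reward = -reward             # alternate perspectives: winner, loser, winner, ...

if __name__ == "__main__":
    for _ in range(20_000):
        self_play()
    # The learned values end up favouring moves that leave a pile divisible
    # by 3, which is the known optimal strategy for this toy game.
    print({pile: round(value[pile], 2) for pile in range(1, 10)})
```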
So, how quickly was the AI able to develop its algorithm well enough to beat one of the most advanced chess programs in the world?
Four hours.
It wasn't just the speed with which it machine-learned its way to chess mastery that amazed people, either. It was AlphaZero's, for lack of a better word, creativity. Writing in The Atlantic, historian Yuval Noah Harari, author of Sapiens: A Brief History of Humankind and Homo Deus: A Brief History of Tomorrow, notes that some of AlphaZero's strategies could even be described as genius.
Everything about AlphaZero is indicative of how fast and how acute the AI revolution is likely to be. Programs like this will essentially be doing the same kind of information processing our brains do, except better, far better, with a breadth and depth that no biological system (including the human brain) could ever hope to compete with.
Debates about consciousness and free will aside, these programs will undoubtedly possess intelligence by at least some definition of the word. Unconstrained by biology and with a human-like ability to learn and course-correct, the potential for change is so large that it may be impossible to comprehend, let alone predict.
Yet, we do have some ideas about where we may end up.
But to truly understand what that finish line may look like, we first need to understand what artificial intelligence really is.
What is artificial intelligence? There is no single, universally accepted definition of AI, which means it can be easy to get lost in the philosophical and technical woods while trying to outline one. There are, however, a few key points that researchers agree are relevant to any definition.
The Stanford Encyclopedia of Philosophy notes that many scientists and philosophers have attempted to define AI through the concept of rationality, expressed in either machine thinking or behavior. A 2019 report released by the European Commission describes the basics of how AI programming achieves that rationality through perceiving the environment, interpreting the information found within it, and then deciding on the best course of action for a particular goal, potentially altering the environment in the process.
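That perceive, interpret, decide, and act description maps onto what textbooks call a rational-agent loop. The sketch below is a hypothetical, minimal rendering of that loop in Python; the thermostat-style environment and all names in it are invented for illustration and are not drawn from the Commission's report.

```python
# A minimal "rational agent" loop: perceive -> interpret -> decide -> act.
# The environment is a toy thermostat problem; every name is a placeholder.

TARGET_TEMP = 21.0  # the agent's goal: keep the room near 21 degrees C

class Environment:
    def __init__(self, temperature=18.0):
        self.temperature = temperature

    def sense(self):
        # Perception: the agent only ever sees a reading, not the world itself.
        return {"temperature": self.temperature}

    def apply(self, action):
        # Acting potentially alters the environment, as the report notes.
        if action == "heat":
            self.temperature += 0.5
        elif action == "cool":
            self.temperature -= 0.5

def interpret(percept):
    # Interpretation: turn raw readings into a judgement about the situation.
    error = percept["temperature"] - TARGET_TEMP
    return "too_cold" if error < -0.25 else "too_hot" if error > 0.25 else "ok"

def decide(state):
    # Decision: choose the action expected to best serve the goal.
    return {"too_cold": "heat", "too_hot": "cool", "ok": "idle"}[state]

if __name__ == "__main__":
    env = Environment()
    for step in range(10):
        percept = env.sense()          # perceive the environment
        state = interpret(percept)     # interpret the information found in it
        action = decide(state)         # decide on the best course of action
        env.apply(action)              # (potentially) alter the environment
        print(f"step {step}: {percept['temperature']:.1f} C -> {action}")
```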
Experts at IBM and the Massachusetts Institute of Technology (MIT) founded the MIT-IBM Watson AI Lab in 2017 and offer useful perspectives on how to think of the technology. The lab's name may be familiar to you; Watson was the program that beat out two human competitors to win on the game show Jeopardy! back in 2011. The lab defines AI as enabling computers and machines to mimic the perception, learning, problem-solving, and decision-making capabilities of the human mind. This is an umbrella definition that does an excellent job of encapsulating the basic idea.
Importantly, the lab then distinguishes between three AI categories. "Narrow AI" is composed of algorithms that perform specific tasks at a daunting speed. Narrow AI encompasses most of the AI technology in existence today: voice assistant technology, translation services, and the chess programs mentioned above are all examples of this type of AI.
The Watson AI Lab aims to take AI two critical steps further. First to "broad AI," meaning systems that are able to learn with greater flexibility. And eventually to "artificial general intelligence," meaning systems capable of complex reasoning and full autonomy.
This last category would be something akin to the archetypal sci-fi version of autonomous machines.
For now, most AI technology remains in the narrow classification. By looking at the current progress of that narrow AI and its benefits and risks, we can see hints of what the future might bring.
Pulling back from the slightly esoteric nature of programs like AlphaZero, Stockfish, and Watson, we see how AI's reach currently extends into the life of the average person. Millions of people use programs like Siri and Alexa every day. Chatbots help consumers troubleshoot their problems, and foreign language students and travelers around the world rely on online translation services. When you do a simple Google search, human-tweaked algorithms carefully arrange what you see and what you don't.
A trip to the hospital or clinic could put you in close contact with AI, too. In 2019, Harvard University reported that, while the majority of current medical AI applications deal in simple numerical or image-based data, such as analyzing blood pressure or MRIs, the technology is advancing to impact health in much larger ways. For example, researchers from Seoul National University Hospital and College of Medicine have developed an algorithm capable of detecting abnormalities in cell growth, including cancers. When stacked against the performance of 18 actual doctors, the algorithm outperformed 17 of them.
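For a rough sense of what "image-based" medical AI means in practice, the sketch below trains a classifier on synthetic, image-like arrays and estimates the probability that a new scan is abnormal. It uses scikit-learn for brevity, and the random data, the 16x16 "scans," and the linear model are deliberate simplifications; it bears no resemblance to the Seoul National University system described above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Deliberately simplified: random 16x16 "scans" stand in for real medical
# images, and a linear model stands in for the deep networks actually used.
# The point is only the shape of the pipeline: pixels in, risk estimate out.

rng = np.random.default_rng(0)

def synthetic_scan(abnormal):
    scan = rng.normal(size=(16, 16))
    if abnormal:
        scan[4:8, 4:8] += 2.0          # a bright blob plays the role of a lesion
    return scan.ravel()                # flatten the pixels into a feature vector

# A labelled training set: 200 "normal" and 200 "abnormal" synthetic scans.
X = np.array([synthetic_scan(a) for a in [False] * 200 + [True] * 200])
y = np.array([0] * 200 + [1] * 200)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Inference on a new, unseen scan: the output is a probability a clinician
# could weigh as one signal among many, never a verdict on its own.
new_scan = synthetic_scan(abnormal=True)
print("estimated probability of abnormality:",
      round(model.predict_proba([new_scan])[0, 1], 3))
```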
Meteorology is getting a boost from AI as well. A collaboration between Microsoft and the University of Washington has resulted in a weather prediction model that uses nearly 7,000 times less computing power than traditional models to generate forecasts. While those forecasts were less accurate than the most advanced models currently in use, this work represents an important step forward in cutting down the time and energy it takes to create weather and climate models, which could someday save lives.
The farming industry is another area that would benefit greatly from the development of such weather-predicting AI. And those working in agriculture are just as busy incorporating the technology into much of what they do.
Forbes reports that, by 2025, investment in smart technology for agricultural use will reach over $15 billion. This AI is starting to transform the field, bettering crop yields and lowering production costs. Coupled with drones and field-based sensors, AI is helping to generate completely new information pools the sector has never had access to before, allowing farmers to better analyze fertilizer effectiveness, improve pest management, and monitor the health of livestock.
Machine learning is even being used to create systems that mimic human characteristics like humor. In 2019, Wired reported on researchers who designed an AI capable of creating puns. It may be possible in the near future to shoot the breeze with a linguistically sharp Siri or Alexa, trading wordplay as you go. You know what they say about an eye for an AI.
All of this is exciting. Despite the game-changing levels of hope and optimism that AI is ushering in for humanity's future, however, there are unavoidable conversations to be had about the dangers it could pose as well.
The risks associated with using AI are many. It's important to understand that, however bright AI could potentially make the future, it could also be used to bring about practices that would be perfectly at home in an Orwellian or Huxleyan context.
In nearly every field in which AI is being applied, important ethical questions are being raised. Critically, whatever problems AI exhibits in the future are likely to be reflections and extensions of the humans behind it.
Unfortunately, a look at those simple, everyday Google searches shows how human input can drive machine learning for better or for worse. According to reporting by The Wall Street Journal in 2019, Google's algorithms are subject to regular tinkering from executives and engineers who are trying to deliver relevant search results while also pleasing a wide variety of powerful interests and driving the parent company's more than $30 billion in annual profit. This raises concerns about who influences what billions of search engine users see on a daily basis, and how that influence might shift according to undisclosed agendas.
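The kind of regular tinkering described in that reporting is easier to picture with a toy ranking function. The sketch below is purely hypothetical and has nothing to do with Google's actual systems; it only shows how one quiet change to hand-set weights can reorder what users are shown.

```python
# A toy search ranker. The weights are exactly the sort of knobs a human
# engineer can quietly adjust -- an invented example, not Google's system.

DOCS = [
    {"title": "Independent review of product X", "relevance": 0.9, "partner": 0.0, "freshness": 0.4},
    {"title": "Sponsored guide to product X",    "relevance": 0.6, "partner": 1.0, "freshness": 0.9},
    {"title": "Forum thread about product X",    "relevance": 0.8, "partner": 0.0, "freshness": 0.7},
]

def rank(docs, weights):
    """Order documents by a weighted sum of their scoring signals."""
    score = lambda d: sum(w * d[signal] for signal, w in weights.items())
    return sorted(docs, key=score, reverse=True)

neutral = {"relevance": 1.0, "partner": 0.0, "freshness": 0.2}
tweaked = {"relevance": 1.0, "partner": 0.6, "freshness": 0.2}  # one quiet change

for name, weights in (("neutral weights", neutral), ("tweaked weights", tweaked)):
    print(name, "->", [d["title"] for d in rank(DOCS, weights)])
```

With the neutral weights, the independent review ranks first; bumping the "partner" weight even modestly puts the sponsored result on top, which is precisely the kind of quiet influence the reporting describes.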
It could lead us to a kind of propaganda and social engineering that is terrifyingly effective.
And while it's true that AI is revolutionizing the medical world in life-saving ways, the benefits it brings are likewise subject to serious pitfalls.
Writing in the journal AI & Society in 2020, Maximilian Kiener warns that machine learning is vulnerable to cyber attacks, data mismatching, and the biases of its programmers, at the very least. Kiener references a study in which an AI algorithm assessed black women being tested for breast cancer as having a lower risk of harmful mutations than white women, despite the two groups having a similar risk in reality.
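One common way such disparities arise is when a group's positive cases are under-recorded in the training data, so the model learns to under-predict that group's risk. The sketch below fabricates data purely to demonstrate that mechanism; it is not the study Kiener cites, and all of the numbers are invented.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# A fabricated demonstration of one bias mechanism: if one group's positive
# cases are systematically under-recorded, a model trained on those labels
# under-predicts risk for that group even when the true risk is identical.

rng = np.random.default_rng(1)

def make_group(n, true_risk, recorded_fraction):
    """One group's data; only a fraction of true positives ever get labelled."""
    feature = rng.normal(size=(n, 1))                      # an uninformative clinical feature
    truth = rng.random(n) < true_risk                      # identical underlying risk
    labels = truth & (rng.random(n) < recorded_fraction)   # missed diagnoses -> missing labels
    return feature, labels.astype(int)

# Group A: positives fully recorded. Group B: half of positives never recorded.
xa, ya = make_group(5000, true_risk=0.3, recorded_fraction=1.0)
xb, yb = make_group(5000, true_risk=0.3, recorded_fraction=0.5)

# Stack the groups, adding a 0/1 group indicator as a second feature.
X = np.vstack([np.hstack([xa, np.zeros_like(xa)]),
               np.hstack([xb, np.ones_like(xb)])])
y = np.concatenate([ya, yb])
model = LogisticRegression(max_iter=1000).fit(X, y)

risk_a = model.predict_proba([[0.0, 0.0]])[0, 1]   # predicted risk, group A
risk_b = model.predict_proba([[0.0, 1.0]])[0, 1]   # predicted risk, group B
print(f"predicted risk - group A: {risk_a:.2f}, group B: {risk_b:.2f} (true risk is 0.30 for both)")
```

On this fabricated data the model reports roughly half the risk for group B that it reports for group A, even though both groups have the same underlying risk by construction.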
Errors like this could potentially be fatal, and they could result in specific groups and classes of people who are unable to reap the benefits of modern medicine.
As AI integrates more and more with medical technology, the disclosure of such risks to patients is imperative.
Similarly, self-driving cars are not exempt from a host of sobering technical and ethical challenges. In 2018, a self-driving Uber car hit an Arizona pedestrian, who later died at the hospital from her injuries. As NBC News reports, there was no malfunction in the car's AI programming: it had been trained to recognize pedestrians only at crosswalks, not when they were jaywalking.
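The failure mode NBC describes, a model blind to situations absent from its training data, can be shown with a deliberately tiny example. The sketch below is an invented nearest-neighbour toy, nothing like a real perception stack, but the gap in coverage produces the same kind of blind spot.

```python
# A toy coverage-gap illustration: a nearest-neighbour "detector" whose only
# pedestrian examples were recorded at crosswalks. Features and data are
# invented; real self-driving perception is vastly more complex, but the
# failure mode is the same in spirit.

# Each feature vector is (distance_to_crosswalk_m, is_moving_across_road).
TRAINING = [
    ((0.0, 1.0), "pedestrian"),   # every pedestrian example sits at a crosswalk
    ((1.0, 1.0), "pedestrian"),
    ((40.0, 0.0), "background"),  # far from crosswalks: nothing crossing
    ((60.0, 0.0), "background"),
]

def classify(sample):
    """Label a sample by its nearest training example (1-nearest-neighbour)."""
    squared_distance = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    return min(TRAINING, key=lambda example: squared_distance(example[0], sample))[1]

# A person crossing mid-block, far from any crosswalk:
jaywalker = (50.0, 1.0)
print(classify(jaywalker))  # -> "background": the gap in the data becomes a blind spot
```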
It seems like a minor oversight, but once they are fully integrated into our infrastructure, AI systems that are similarly "blind" could cause a catastrophic loss of life.
AI has also found its way into the wars of the world. Militaries engaged in this generation's arms race are trying to perfect the technology in automated weapons systems. While this could certainly bring about a lot of good in terms of reducing loss of life, the question of how comfortable humanity is with machine learning deciding who lives and who dies in certain situations is one we are already facing right now.
And in other arenas, some governments and private security organizations have already utilized facial recognition software to frightening effect. China's use of the technology to profile Uyghur people within its borders has been raising moral eyebrows for some time, for example.
As the journal Nature reported at the end of 2020, some researchers are beginning to push back against those in the academic community who have published papers on how to build facial recognition algorithms.
Amazon, one of the largest providers of facial recognition AI to both the US government and private security organizations in China, has faced much scrutiny over the technology's relationship with civil rights abuses. In June 2020, the MIT Technology Review reported that, facing public backlash as well as pressure from the American Civil Liberties Union, the company decided to suspend sales of facial recognition technology for a one-year period, following similar declarations by both IBM and Microsoft. According to the BBC, Amazon is waiting on Congress to implement new rules regarding the technology's use.
For now? There is little legislation governing where and how the technology is deployed, whether it be for catching suspected criminals or undocumented immigrants, or for monitoring where you shop, what you buy, and who you go to dinner with.
How we design and use AI also has a tangible effect on human psychology and social cohesion, particularly with respect to the kinds of information we are shown online.
The Wall Street Journal reported back in 2018 on how YouTube's algorithms recommend high-traffic videos that are more likely to keep people on the site and watching. Whether by design or not, this frequently leads viewers toward increasingly extreme content, even when those users haven't shown interest in such content. Since it has been established that the internet is particularly prone to fostering conspiracy theories, worries about the part these algorithms play in society's troubles and radicalization may be well justified.
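To see how a watch-time objective can drift toward extreme content without anyone intending it, consider the toy recommender below. The catalogue, the numbers, and the greedy policy are all invented for illustration; this is not a description of YouTube's system, and the correlation between extremity and watch time is an assumption built into the example.

```python
# A toy recommender that maximizes expected watch time. Nobody writes a rule
# saying "show extreme content" -- but if extremity correlates with watch time
# in the data, a pure engagement objective surfaces it anyway.
# All titles and numbers are invented.

CATALOGUE = [
    {"title": "Calm explainer",      "extremity": 0.1, "expected_minutes": 7.0},
    {"title": "Heated debate",       "extremity": 0.5, "expected_minutes": 9.0},
    {"title": "Outrage compilation", "extremity": 0.9, "expected_minutes": 14.0},
]

def engagement_only(video):
    # The naive objective: watch time and nothing else.
    return video["expected_minutes"]

def engagement_with_guardrail(video):
    # One possible mitigation: trade some watch time against extremity.
    return video["expected_minutes"] - 10.0 * video["extremity"]

def recommend(objective):
    """Return the catalogue entry that maximizes the given ranking objective."""
    return max(CATALOGUE, key=objective)

for objective in (engagement_only, engagement_with_guardrail):
    print(objective.__name__, "->", recommend(objective)["title"])
```

Under the first objective the most extreme video wins every time; under the second it does not, which is why the design of the objective, and not just the data, matters.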
The social ramifications could go far deeper. Dirk Helbing is a professor of computational social science at ETH Zurich whose research specializes in applying computer modeling and simulations to the phenomena of social coordination, conflict, and collective opinion formation. In the book Towards Digital Enlightenment: Essays on the Dark and Light Sides of the Digital Revolution, he, the economist Bruno S. Frey, and seven other researchers write lucidly on how the relationship of coder and coded is becoming a two-way street:
"Some software platforms are moving towards persuasive computing. In the future, using sophisticated manipulation technologies, these platforms will be able to steer us through entire courses of action, be it for the execution of complex work processes or to generate free content for Internet platforms, from which corporations earn billions. The trend goes from programming computers to programming people."
Yuval Noah Harari suggests similarly disconcerting scenarios. While he warns that the dystopian vision of malevolent leaders monitoring citizens' biometrics and psyches with AI is a distinct possibility, it might not be the one we should be most worried about:
"We are unlikely to face a rebellion of sentient machines in the coming decades," he writes in The Atlantic, "but we might have to deal with hordes of bots that know how to press our emotional buttons better than our mother does and that use this uncanny ability, at the behest of a human elite, to try to sell us something, be it a car, a politician, or an entire ideology."
Ultimately, it is impossible to list all the benefits and risks of artificial intelligence, as the technology already impacts nearly every aspect of our lives, from what we watch, to what we buy, to how we think and what we know. Likewise, it is impossible to know exactly where we will end up. However, the multitude of possibilities highlights just how important this issue is, and it makes one thing abundantly clear: the decisions we make today dictate where we end up tomorrow, which is why it is so important to go slow and not "move fast and break things."
The realization of general AI, something that seems to be an inevitability at this point, could end up being humanity's greatest-ever technological achievement, or our demise. While delivering a TED Talk in Alberta, Canada, in 2016, neuroscientist and AI commentator Sam Harris emphasized just how important it is to get the initial conditions of that achievement right:
"When you're talking about superintelligent AI that can make changes to itself, it seems to me that we only have one chance to get [it] right. The moment we admit that information processing is the source of intelligence, [...] and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know, then we have to admit that we're in the process of building some sort of god. Now would be a good time to make sure it's a god we can live with."