What’s Behind the Race to Create Artificial General Intelligence? – Truthdig
Zuade Kaufman: Hello, I'm Zuade Kaufman, publisher of Truthdig.
As you know, at Truthdig, we dig beneath the headlines to find thought-provoking, forward-thinking ideas and conversations.
We are thrilled today to present two influential and brilliant thinkers,
Dr. Emile P. Torres and Dr. Timnit Gebru, who will be discussing the timely and important question, "What's Behind the Race to Create Artificial General Intelligence?" which is also the title of today's event.
During this discussion, they will provide an overview of the bundle of ideologies known as TESCREAL, an acronym they coined while Emile was curating the Dig titled Eugenics in the Twenty-First Century: New Names, Old Ideas, which can be found on the Truthdig website.
TESCREAL refers to ideologies that increasingly influence public perceptions and policy debates related to AGI. Emile and Timnit will examine the intellectual underpinnings of these ideologies, which purport to answer the question: Is AGI a transformative technology that will usher in a new age of abundance and prosperity, or will it pose dire threats to humanity?
And now for the introductions.
Dr. Emile P. Torres is a philosopher and historian whose work has focused on global catastrophic risks and human extinction. They have published widely on a range of topics, including religious end-time narratives, climate change and emerging technologies. They are the author of the book Human Extinction: A History of the Science and Ethics of Annihilation, which was published this year.
We are also going to hear from Dr. Timnit Gebru, a computer scientist whose work focuses on algorithmic bias and data mining. As an advocate for diversity in technology, Timnit co-founded Black in AI. She also founded DAIR, a community-rooted institute that was created to counter Big Tech's pervasive influence on the research, development and deployment of AI. In 2022, Timnit was one of Time magazine's 100 Most Influential People. She continues to be a pioneering and cautionary voice regarding ethics in AGI.
To our audience, please feel free to write your questions during their discussion, wherever you're watching. And there will be a Q&A at the end.
Thank you for participating in this event. I'll hand it over to you, Emile and Timnit.
Émile P. Torres: Thanks so much. So I missed some of the intro due to a technical issue on my side. So maybe I'll repeat some of what you said now. Basically we'll be talking about this acronym that, you know, has been central to the Dig project that I've participated in at Truthdig, but also, it really came out of a collaboration that I was engaged in with Timnit. So I think there isn't any particular rigid structure to this conversation, but I figured we could just go over kind of the basics of the acronym, of this concept, why it's important, what its relation is to artificial general intelligence and this race right now to the bottom, as it were, trying to build these ever-larger language models. And then, as mentioned, we'll take questions at the end. So I hope people find this to be informative and interesting. So yeah, Timnit, is there anything you'd like to add? Otherwise, we can sort of jump right into what the acronym stands for and go from there.
Timnit Gebru: Yeah, let's jump in.
Émile P. Torres: Okay, great. So this concept came out of, as I mentioned, this collaboration. Basically, Timnit and I were writing this paper on the influence of a constellation of ideologies within the field of AI. In writing this paper, discussing some of the key figures who played a major role in shaping the contemporary field of AI, including or resulting in this kind of race to create artificial general intelligence or AGI, we found that it was sort of unmanageable because there was this cluster of different ideologies that are overlapping and interconnected in all sorts of ways. Listing them all, after the names of some of these individuals who have been influential, was just too much. So the acronym was proposed to sort of economize and streamline the discussion, so that we could ultimately get at the crux of the issue: that there is this, you know, bundle of ideologies that is overlapping and interrelated in various ways. Many of these ideologies came out of previous ideologies and share certain key features. So the acronym stands for (it's a mouthful) Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism and Longtermism. The way I've come to conceptualize this bundle is that Transhumanism is sort of the backbone. If that's the case, Longtermism is sort of this galaxy brain atop the bundle, because it binds together some of the major themes of the other ideologies into, like, kind of a single, pretty comprehensive normative futurology, or sort of worldview, about what the future can and ought to look like. So that was the impetus behind this bundle. So for example, you know, we were writing about an Oxford philosopher and neo-eugenicist named Nick Bostrom. We've mentioned that he is a transhumanist, who participated in the Extropian movement in the 1990s, anticipates the singularity, and is close colleagues with the leading modern Cosmist, Ben Goertzel. He's hugely influential, has very close ties to the Rationalist and Effective Altruism communities. In fact, his institute, the Future of Humanity Institute, shared office space for a long time (it might still share office space, I'm not sure, but he's for many years shared office space) with the Center for Effective Altruism, which is sort of the main EA organization. And then Bostrom also is one of the founders of this Longtermist ideology. So that gives you a sense of, like, okay, you're listing this one name, you know, and connecting him to all of these different ideologies. Doing that throughout the paper with other names and so on is just unmanageable. So hence the acronym was born.
Timnit Gebru: I just want to say that my interest was primarily on, you know, the eugenics angle of the whole AGI movement. So, when I approached you about writing a paper, it was like, "Okay, let's talk about how eugenics thought is influencing this AGI movement, starting from why they want to create AGI to what they envision that it will do." And yeah, it just kept on being like, "Before we get to the point, we have to recall, as we say in section two, that Nick Bostrom did this thing and was also part of this other institute, which is also investing in this thing." And it was just kind of impossible to get to the point that we were making. But I was also very surprised, and I don't know if this was your experience. Of course, I can see the link to eugenics, because I've been around the Effective Altruists and the longtermist movement and the way they talk about how, you know, we have to work on AI to save humanity and all that, and I was very irritated by it for a long time. However, it's when we were working on this paper that I realized that the link is direct, like it's not this roundabout kind of subtle thing. It's a direct link to eugenics. And that was very surprising to me.
Émile P. Torres: Yeah. So, maybe we can elaborate on that just a bit. Because, you know, this backbone of the bundle, transhumanism, I mean, that is uncontroversially considered to be a version of eugenics. It's called so-called liberal eugenics, which is supposed to contrast with the old authoritarian eugenics of the 20th century. Although I think there are pretty good arguments for why, in practice, a liberal eugenics program would ultimately be very illiberal and, you know, restrict freedom. So that's another topic perhaps we could go into. But yeah, I agree. I mean, transhumanism itself was developed by 20th-century eugenicists. So there's sort of, you could distinguish between the first wave and the second wave of eugenics. The main difference between those two is the methodology. So first-wave eugenics was about trying to control population-level reproductive patterns. So if you get individuals with so-called desirable attributes to have more children, and individuals with what are deemed to be undesirable properties to have fewer children, then over many generations, this is a transgenerational process, then you can change the frequency of certain traits within the population. So maybe the relevant trait is, like, you know, intelligence, whatever that means exactly. Second-wave eugenics, that was really a response to the development of certain emerging technologies, in particular genetic engineering in the 1970s. But by the 1980s, there was plenty of talk of the possibility of nanotechnology radically enhancing us, modifying our bodies as well. And of course, AI is a big part of that as well. So that's kind of the second, that's the defining feature of the second wave of eugenics. Transhumanism, then, it was developed by these first-wave eugenicists; it basically is this idea that rather than just perfecting the human stock and preventing the degeneration of humanity, or certain groups of humanity, why not just, you know, transcend humanity as a whole? If we can create, you know, the most excellent, the best version of humanity possible through selective breeding, or maybe through emerging technologies, so-called person-engineering technologies, why stop there? Why not try to create this sort of, like, superior post-human species? So that idea, that goes back to the, like, early 20th century. And then really it merged with the second-wave methodology in the second half of the 20th century; in particular, the late 1980s, early 1990s is when modern transhumanism emerged. So all of this is to say, you're exactly right, that the connection between this TESCREAL bundle via transhumanism and eugenics is quite direct.
Timnit Gebru: Right. But what I was saying was also that the link to the origins of the drive to create AGI that it comes, you know, I think we were when we were looking into the TESCREAL bundle, for me, I didnt know what Cosmism was until we were reading the first book on AGI, which was written in, what 2007, by Ben Goertzel and his collaborator. And then he would and then I was like, Oh, Ive heard about this guy, but he wasnt super influential in my space, right? So I havent really had to look into him or think about him very much. And then I started reading about his Cosmist manifesto, and all of this stuff, right? And then its like, wow, okay, so this link is direct. He really wants to create AGI because he wants to create post-humans that are not even human. They called it transhuman AGI. So to me, that was theres always eugenicist undertones in artificial intelligence in general and people have written that California, obviously, you know, has had many its like the mecca of eugenics in the 20th century and many people have written about different angles of this starting from John McCarthy and some of the the people who coined the term AI, but, you know, I still hadnt seen that direct link. And so, you know, Im not you have written so much about some of these people and you were in one of the movements, you were a longtermist yourself and so youve been writing about their writings and their books. Unlike you, that has not been my profession. I am just trying to work on Im a technologist, Im just trying to work on building these things and so I only read these things when I absolutely have to. I only read whatever Ben Goertzel is writing about paradise engineering in the universe or whatever, when I absolutely have to. So working on this paper and seeing these direct links, it was very sad, actually, for me, I would say.
Émile P. Torres: Yeah. I mean, so, you know, I was in the longtermist movement, as you mentioned, for many years. The word longtermism was coined in 2017. But before the word was out there, it basically referred to people who work on existential risk mitigation, particularly, as well as on understanding the nature and number and so on of the different existential risks out there. So there were, sort of, longtermists before the word existed. I was part of that community. But also the overlap between the longtermist community and the transhumanist movement is pretty significant, which is consistent with this notion that the bundle is kind of a cohesive entity that extends from the late 1980s all the way up to the present. So yeah, I was very much immersed in this movement, this community and these ideas. I have to say, though, one thing that was surprising and upsetting for me, having been in this community but not really having explored every little nook and cranny of it, and maybe also just being a bit oblivious, is the extent to which a lot of the attitudes that animated the worst aspects of first-wave eugenics were present throughout this community. Once you start looking for instances of these discriminatory attitudes, racism, ableism, sexism, xenophobia, classism and so on, they sort of pop up everywhere. So that was one surprising thing for me when we started working on the project. Ultimately, the first article that I wrote for the Dig was just kind of cataloging some of the more egregious and shocking instances of kind of unacceptable views. For example, a number of leading longtermists have approvingly cited the work of Charles Murray, you know, who is a noted racist.
Timnit Gebru: And the Effective Altruists as a whole, even the ones who are not necessarily Longtermists.
Émile P. Torres: Yeah, yeah, absolutely. I mean, I mentioned in one of my articles that Peter Singer published this book in the 1980s, called Should the Baby Live?, which basically endorsed the use of infanticide for individuals, you know, babies who have some kind of disability. So, yes, these ideas are sort of omnipresent, and once you start looking for them, they show up everywhere within the neighborhood of the TESCREAL bundle, including EA. And so that was something that was kind of surprising to me and disheartening as well.
Timnit Gebru: I think the first time I remember my brush with maybe I think it would be good to give people like a two-minute overview of the TESCREAL bundle, but I will just say, with Effective Altruism, I think I remember more than 10 years ago or something like that, somebody describing the idea to me and I just from the get-go, when I heard what theyre saying, Were going to use data to figure out how to give our money in the most efficient way possible, something about that just rubbed me the wrong way already because it reminds me of a lot of different things. Its making things abstract, right? Youre not really at a human level connecting with the people around you or your community, but youre on the abstract trying to think about the, you know, global something. So that was that. And then I was like, okay, but I didnt have to be around this group that much. Then I remember talking to someone who told me that they were at the Effective Altruism conference. They said their keynote speaker was Peter Thiel. I was like, okay, like Effective Altruism, Peter Thiel. Then this person explained to me how Peter Thiel was talking about how to save the world, people have to work on artificial intelligence. That is the number one thing you need to be working on. This was more than 10 years ago. And I could not believe it. And then the person went ahead to explain to me why. Well, you know, even if there was a point 000000, whatever, one chance of us creating something that is super intelligent, and that even if theres a really tiny chance of that super intelligent thing wanting to extinguish us, the most important thing to do is to make sure that that is stopped, because there will be so many people in the future. So this person said that to me back then, right, and I didnt, you know, at that time, I wasnt looking at, I didnt know what longtermism was, or anything. I just had this association with Effective Altruism and I was like, This is ridiculous, you gotta be kidding me. But what was different back then versus now is that this type of thinking was not driving the, basically the most popular and pervasive versions of artificial intelligence. The field or the systems. People doing this were fringe. And even when people like Elon Musk at that time were talking about how AI can be the devil or invoke the devil and things like that, many people in the field were, like, laughing at them. So it wasnt a situation where you had to work in the field, and really just either buy into it because thats where the money comes from, or interact with them too much. It was the kind of thing where you could avoid them. But in the last few years, it became not only impossible, but they have been at the forefront of all of the funding and all of the creation and proliferation of these huge companies, like Anthropic is one, that got hundreds of millions of dollars from Effective Altruism. And so thats why for me, I wanted to kind of make a statement about it and collaborate with you to work on this. Because I kind of feel like theyre actually preventing me from doing my job in general. But I think yeah, before we jump into it, maybe its good to, maybe you can explain a little bit like, what TESCREAL stands for, right? Weve gone through transhumanism, but then theres a number of others. Actually, we might have to include the new EACC thing there too.
Émile P. Torres: Yeah. Maybe the acronym needs to get even clunkier to incorporate this new AI accelerationist movement.
Timnit Gebru: Yeah.
Émile P. Torres: So yeah, very briefly, within this kind of TESCREAL movement, this community, there are two schools of thought. They differ primarily not in terms of the particular techno-utopian vision of the future. In both cases, they imagine us becoming digital, eventually colonizing space, radically augmenting our intellectual abilities and so on, becoming immortal. But they differ on their probability estimates that AGI is going to kill everybody. So you've got accelerationists who think that the probability is low. In general, there are some nuances to add there. But then there are Doomers, AI Doomers. So Eliezer Yudkowsky is maybe the best example.
Timnit Gebru: Didn't he think that the singularity was coming in 2023?
Émile P. Torres: That was a long time ago. I think in the early 2000s his views shifted. He got a bit more anxious about the singularity. Maybe the singularity is not going to inevitably result in this kind of wonderful, paradisiacal world in the future, but actually could destroy humanity. But anyway, so yeah, the TESCREAL bundle is Transhumanism, this notion that we should use technology to radically enhance the human organism. The second letter is Extropianism. This was the first organized transhumanist movement, which really emerged most significantly in the early 1990s and was associated with something called the Extropy Institute, founded by a guy named Max More. And then Singularitarianism, this is also kind of just a version of transhumanism that puts special emphasis on the singularity, which has a couple different definitions, but the most influential has to do with this notion of intelligence explosion. So once we create an AI system that is sufficiently intelligent, it will begin this process of recursive self-improvement. And then very quickly, you go from having a human-level AI to having a vastly superintelligent entity that just towers over us to the extent that we tower over the cockroach, something like that. So that's Singularitarianism. And then Cosmism is kind of, you know, transhumanism on steroids. In a certain sense, it's about not just radically modifying ourselves, but eventually colonizing space and engaging in things like space-time engineering. So this is just, like, manipulating the universe at the most fundamental level to make the universe into what we want it to be. So that's the heart of Cosmism. It has a long history going back to the Russian Cosmists in the latter 19th century, but we're really focused on the modern form that came out of what was articulated by Ben Goertzel, the individual who christened the term AGI in 2007. So then Rationalism is, like, basically, if we're going to create this techno-utopian world, that means that a lot of smart, quote unquote, people are going to have to do a lot of smart things. So maybe it's good to take a step back and try to figure out how to optimize our smartness, or rationality. So that is really the heart of Rationalism. How can we be maximally-
Timnit Gebru: Take emotions out of it, they say, although they're some of the most emotional people I've talked to.
Émile P. Torres: Yeah, yeah. I mean, there's-
Timnit Gebru: They're like robots. I think that, to me, Rationalism feels like, let's act like robots, because it's better. Any human trait that is not like a robot is bad. So let's figure out how to communicate like robots. Let's figure out how to present our decision-making process like that of a computer program or something. That's how it feels to me, which then makes sense of, you know, how cultural workers are currently being treated. Like how artists and other kinds of cultural workers are being treated by this group of people.
Émile P. Torres: Yeah, so I think from the Rationalist view, emotions are sort of the enemy. I mean, they're something that's going to distort clear thinking. So, like, an example that I often bring up, because I feel like it just really encapsulates the sort of alienated, or you might say robotic, way of thinking, is this LessWrong post from a bit more than a decade ago from Eliezer Yudkowsky, in which he asked: if you're in a forced-choice situation, you have to pick between these two options, which do you choose? One is that a single individual is tortured relentlessly and horrifically for 50 years. The other is that some enormous, unfathomable number of individuals have the almost imperceptible discomfort of an eyelash in their eye. Well, if you crunch the numbers, and you really are rational, and you're not letting your emotions get in the way, then you'll say that the eyelash scenario, that is worse. So if you have to choose between the two, pick the individual being tortured for 50 years. That is a better scenario than all of these individuals who just go, "Oh!"
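To make the arithmetic being described concrete, here is a minimal sketch of that kind of naive utilitarian aggregation; the disutility values and the population size below are purely illustrative assumptions, not figures from Yudkowsky's post or from the speakers.

```python
# Toy illustration of the utilitarian aggregation described above, with
# made-up numbers: a tiny per-person harm, multiplied over a large enough
# population, ends up "outweighing" one person's 50 years of torture.

DUST_SPECK_DISUTILITY = 1e-9           # assumed: negligible harm per person
TORTURE_DISUTILITY_PER_YEAR = 1e6      # assumed: enormous harm per year
NUM_PEOPLE_WITH_DUST_SPECK = 3 ** 100  # stand-in for the "unfathomable number"

torture_total = 50 * TORTURE_DISUTILITY_PER_YEAR
dust_total = NUM_PEOPLE_WITH_DUST_SPECK * DUST_SPECK_DISUTILITY

print(f"torture, 50 years:   {torture_total:.3e}")
print(f"dust specks, summed: {dust_total:.3e}")
print("specks deemed worse than torture?", dust_total > torture_total)
```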
Timnit Gebru: The through line, the transhumanism, is like the TESC part. And then the REAL part does not, I guess, well, the longtermists seem very much like transhumanists, but the REAL part does not have to be transhumanist. However, this utilitarian maximizing of some sort of utility, I think, exists across all of them.
Émile P. Torres: Yeah, a lot of the early transhumanists were sympathetic with utilitarianism. I mean, you don't have to be a utilitarian to be a transhumanist, just like you don't have to be a utilitarian to be an effective altruist, or even a longtermist. But as a matter of fact, utilitarianism has been hugely influential, even among the transhumanists. I mean, a lot of them are consequentialists. Nick Bostrom, in one of his early papers, his first paper on existential risk, defined it in terms of transhumanism. Then a year later, he basically expanded the definition of existential risk to incorporate explicit utilitarian considerations. So that gives you a sense of how closely bound up, historically, these ideas have been. So you're totally right: utilitarianism, this notion of maximizing value, whatever it is we value, if it's happiness, if it's jazz concerts, the more the better. You want to multiply it as much as possible. So, yeah, unless you have anything else to add, that helps me continue with-
Timnit Gebru: Yeah, I think we're at the EA and L part.
Émile P. Torres: Yeah, so the EA and L part. Effective Altruism is basically just, one way to think of it is, it's kind of what happens when rationalists, rather than focusing just on rationality, pivot to focusing on morality. So the rationalists are trying to optimize their rationality; the effective altruists are trying to optimize their morality. I think there are ways of describing Effective Altruism that can be somewhat appealing. They want to do the most good possible. But you look at the details, and it turns out that there are all sorts of problems and deeply unpalatable-
Timnit Gebru: 20th-century eugenicists also wanted to do the most good possible, right? That's how everybody kind of describes it. Everybody in this movement describes themselves as wanting to save humanity, wanting to do the most good possible. Like, nobody's coming and saying, "We want to be the most evil possible."
Émile P. Torres: Yeah, I mean, there are many in the community who literally use the phrase "saving humanity." What we're doing is saving humanity. So there's a kind of, I mean, as a matter of fact, there is a kind of grandiosity to it, a kind of Messianism. We are the individuals who are going to save humanity, perhaps by designing artificial superintelligence that leads to utopia rather than completely annihilating humanity. So I mean, this is back when I was-
Timnit Gebru: Counteracting against the opposite one, right? We are the ones who are going to save humanity by designing the AGI god that's going to save our humanity. Also, we're the ones who should guard against the opposite scenario, which is an AGI gone wrong killing every single human possible. We are the ones who need to be the guardians. In both cases, this is the attitude of the bundle.
Émile P. Torres: Yeah. That leads quite naturally to Longtermism, which is basically just what happens if you're an EA. Again, EA is hugely influenced by Rationalism. But if you're an EA, and you start reading about some of the results from modern cosmology. How big is the universe? How long will the universe remain habitable? And once you register these huge numbers, all the billions, hundreds of billions of stars out there in the accessible universe and the enormous amount of time that we could continue to exist, then you can begin to estimate how many future people there could be. And that number is huge. So, like, one estimate is that within the accessible universe, there are 10 to the 58 future people. So one followed by 58 zeros. So if the aim, as an Effective Altruist, is to positively influence the greatest number of people possible, and if most people who could exist will exist in the far future, then it's only rational to focus on them rather than current-day people, because there are only 1.3 billion people in multidimensional poverty. That's a lot in absolute terms, but that is a tiny number relative to 10 to the 58. That's supposed to be a conservative estimate. So that's ultimately how you get this longtermist view that the value of the actions we take right now depends almost entirely on the far-future effects, not on the present-day effects. That's the heart of longtermism. And that's why people are so obsessed with AGI: because if we get AGI right, then we get to live forever. We get to colonize space. We get to create enormous numbers of future digital people spread throughout the universe. And in doing that, we maximize value, going back to that fundamental strain at the heart of this TESCREAL movement. We maximize value. So that's ultimately why many longtermists are obsessed with AGI. And again, if we get AGI wrong, that forecloses the realization of all this future value, which is an absolute moral catastrophe.
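To see how that expected-value arithmetic plays out, here is a minimal sketch of the comparison being described; the 10^58 and 1.3 billion figures come from the discussion above, while the probability and per-person benefit are illustrative assumptions.

```python
# Toy expected-value comparison of the kind described above: a vanishingly
# small probability of benefiting 10^58 future people still dominates
# helping everyone in poverty today. The probability and the per-person
# benefit are illustrative assumptions, not anyone's published estimates.

FUTURE_PEOPLE = 10 ** 58                  # the estimate cited in the discussion
PRESENT_PEOPLE_IN_POVERTY = 1.3e9         # the 1.3 billion figure cited above

p_far_future_success = 1e-20              # assumed: tiny chance of "getting AGI right"
benefit_per_person = 1.0                  # assumed: one unit of value per person helped

ev_far_future = p_far_future_success * FUTURE_PEOPLE * benefit_per_person
ev_present_day = 1.0 * PRESENT_PEOPLE_IN_POVERTY * benefit_per_person  # certain, today

print(f"expected value, far future:  {ev_far_future:.3e}")
print(f"expected value, present day: {ev_present_day:.3e}")
print("far future dominates?", ev_far_future > ev_present_day)
```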
Timnit Gebru: I was going to say, its basically a secular religion that aligns very well with large corporations that were seeing right now and the billionaires who are funding this movement, because youre not telling them that they shouldnt be billionaires or they should just give away their resources right now for people who exist right now. Youre telling them that they need to be involved in this endeavor to save humanity from some sort of global catastrophic risk. And therefore, they need to put their intellect and their money to that use, not, you know, to the person that theyre disenfranchising, or the person theyre exploiting. For instance, you know Elon Musk had the biggest racial discrimination case in Californias history because of what he was doing to his workers. And of course, then he said all sorts of other things. But in this ideology, youre telling him No, no, this is a small concern. This is not a big concern. You as a very important and smart person have to be thinking about the far future and making sure that you save all of humanity. Dont worry about this little concern of racial discrimination in your factory. So the reason I became involved in this bundle is because, or not involved in this bundle, sorry, analyzing this bundle is because, you know, being in the field of AI and seeing their growing influence, from, you know, the DeepMind days where now I know, the founders of DeepMind, especially Shane Legg, are in this bundle. The other thing to note is that they all go to the same conferences, are in each others movements. Thats why we made it, you know, one acronym. Effective altruists are very much involved in rationalism and rationality and very much in the other ideologies too. So we see DeepMind being founded. Its one of the most well-known companies whose explicit goal was to create this AGI, this artificial general intelligence, thats going to bring people utopia. Then we see it was funded by billionaires in this bundle like Elon Musk and Peter Thiel. Then we see Nick Bostroms superintelligence coming out, where he warns about both utopia if we build some super intelligent thing and apocalypse if we get it wrong. Then you start having people like Elon Musk going around talking about how were going to have the devil. Then once Google buys DeepMind, you have them all panicking saying they need to create their own, basically DeepMind that is quote, unquote, open. I dont know if OpenAI still has this in their company page but they were saying that if somebody else achieves beneficial AGI, they will think that their mission is complete. How nice of them. Then these people in this bundle come along and they panic; they say theyre going to create OpenAI to once again save humanity. And I remember how angry I was when that announcement came out. I wrote a whole letter just to myself about it because I didnt buy it. It was this Saviorism by this really homogeneous group of people. Then of course, now we have a similar thing going, which is OpenAI is essentially bought by Microsoft, as far as Im concerned. And then you have them panicking yet again with the Future of Life Institute, Max Tegmark, each of these people we can say so much about, coming up with this letter saying that we need to pause AI and things like that. It got so much attention. It was signed by people, including Elon Musk, saying we need to pause AI and then the next day, what happens? Elon Musk announced his X-AI thing. 
So it's like this cycle that goes on every few years, both utopia and apocalypse, right? "Oh, we're gonna bring utopia. No, there might be an apocalypse. We're gonna break this." It's the same people. Two sides of the same coin. And, you know, I'm only seeing this growing after OpenAI. OpenAI wasn't effective altruist enough for a set of people. They left and founded Anthropic. Anthropic got hundreds of millions of dollars from TESCREAL billionaires; most of their money came from Sam Bankman-Fried, who got his money, basically he was convinced to earn his money by the Center for Effective Altruism, with this "earn to give" thing where you earn as much money as possible and give it away to effective altruist causes. And of course his cause was stopping the AGI apocalypse or bringing the AGI utopia. And so then he gives all this money to Anthropic. And now you have all of these organizations that are incredibly influential, in the mainstream. They are no longer fringe like they were 10 years ago. And that's why we're here today talking about them.
Émile P. Torres: Yeah, maybe I'll just add something real quick to that, which is that, you know, years ago, when I was really active in this community, I remember having conversations with people about how in the heck do we get people in power to pay attention to AI, in particular superintelligence. And it was just such a struggle to convince individuals like, you know, Geoffrey Hinton, for example, Yoshua Bengio, and so on. How do we convince them that superintelligence is either going to result in a techno-utopian world, in which we live forever, we colonize space, and so on, or in complete annihilation? So there was a huge struggle, and it's just amazing to witness over the past-
Timnit Gebru: It's unfortunate. Especially with Yoshua, because he was not in that bundle. And I knew him. I had spoken to him for a long time, not as much now. His brother was my manager. And he was not in this whole existential risk thing, and then, all of a sudden, you know, we're all trying to figure out what's going on, because his brother has the complete opposite view. He's definitely not in that crew. But Yoshua talked to Max Tegmark and all of a sudden, he's in full-blown Doomer mode. And this is why I think it's a secular religion. I'm trying to understand what it is that makes scientists want to have that. Is it because they want to feel super important? So Cho, Kyunghyun Cho, who used to be Yoshua's postdoc and is very influential in natural language processing and deep learning, recently came out and said, thankfully, that he's very aware that, you know, ideologies like EA are the ones that are driving this whole existential risk and doomer narrative. He said that there are many people in Silicon Valley who feel like they need to save the world, and that it's only them who can do it. And this is a widespread kind of feeling. I'm glad he spoke up, and I think more researchers like him need to speak up. But it's very unfortunate that back about 10 years ago, people like Yoshua were not taking people like Elon Musk seriously. And Geoff Hinton, I mean, his student, Ilya, is one of the founders of OpenAI and is nearly as full-on in this kind of bundle. So I'm not surprised that he said that. But, you know, to give you an example, a sense of how they minimize our current present-day concerns in favor of this abstract representation of the apocalypse that supposedly everybody should be concerned about: Geoff Hinton was asked on CNN about my concerns about language models, because I got fired for a number of my concerns. Meredith Whittaker was pushed out because she was talking about Google's use of AI for the military. He said that my concerns were minuscule compared to his. This is the way they get to dismiss our present-day concerns while actually helping bring them about through their involvement in these various companies that are centralizing power and creating products that marginalize communities.
Émile P. Torres: Yeah. So thanks for that, Timnit. Should we maybe try to answer a few questions? So maybe I'll read one out, but is the most recent question good for you, Timnit?
Timnit Gebru: Yeah, sure.
Émile P. Torres: So okay, I'll read it out. Question for the speakers: "Where do researchers like Geoffrey Hinton fall? I very much agree that people like Elon Musk in OpenAI have been extremely inconsistent."
Timnit Gebru: So I can answer a little bit on that question. Personally, when you look at the way in which we've described the TESCREAL bundle, and the fact that the AGI utopia and apocalypse are two sides of the same coin, to me, Elon Musk has been consistent. Because his position is always, whenever he feels like he cannot control a company that's purporting to create AGI, he panics and says, "We're going to have an apocalypse." That's what happened in 2013 or, you know, 2014, when DeepMind was acquired by Google. That's what happened when OpenAI was getting tons of money from Microsoft. And that's what happened just now, when he signed and publicized the letter from the Future of Life Institute saying that we need to pause AI. Then the next day, he announces his own thing. This is exactly what he did back in 2015, too. He complained, and then the next day he announced his own thing. So that's why I think he's been super consistent. People like Geoff Hinton hadn't been in this bundle, but their students were, so what happened is the merger between the deep learning crew, which wasn't necessarily in this bundle, like Yoshua and Geoffrey Hinton and all that, that have been around for decades, and companies like DeepMind and OpenAI. You now have the merger between deep learning and machine learning researchers and people in the TESCREAL bundle. And so what we're seeing with people like Geoff Hinton is that his student, Ilya Sutskever, was a cofounder of OpenAI, and now, you know, he's in that bundle. And so Geoff Hinton is going around, but if you look at his talks and arguments, it's so sad. A lot of women especially have been talking about how much of what he says in this area makes no sense. So yeah, so that is kind of my point of view on the machine learning side.
Émile P. Torres: Alright. So, next question. I'll take one quickly from Peter, who asks, "What do you see as the flaw in the longtermist reasoning? Because most of the philosophical counters to longtermism seem to imply antinatalism." So antinatalism is this view that, well, there are different versions of it, but one is that it's wrong to have children. Or that birth has a negative value, something of that sort.
Timnit Gebru: Why do we need both extremes? This is what I don't understand.
Émile P. Torres: Yeah, this is exactly what I'm going to say. I mean, first of all, I think the flaws with longtermism, that would be a whole hourlong talk. So maybe I could just direct you to a forthcoming book chapter I have, which is nice and short and to the point, that, I think, provides a novel argument for why the longtermist view is pretty fundamentally problematic. It's called "Consciousness, Colonization and Longtermism." I put it up on my website. The other thing is, antinatalism, this is not the alternative, or rather, the alternatives do not imply antinatalism. I've mentioned before, in writing and on podcasts and so on, that long-term thinking is not the same as longtermism. You can be an advocate, a passionate advocate, for long-term thinking, as I am, and not be a longtermist. You don't have to believe that we have this kind of moral obligation to go out, colonize, plunder the cosmos, our so-called cosmic endowment of negative entropy, or negentropy, and then create, you know, the maximum number of people in the future in order to maximize value. That's accepted even on a moderate longtermist view, and that is very radical. And so you can reject that and still say, I really care about future generations. I care about their well-being; hence, I care about climate change, I care about nuclear waste and how that's stored, and so on and so on. So I would take issue with the way that the question itself is couched.
Timnit Gebru: Yeah. And why does the counter to longtermism only have to come from Western philosophy, right? There are many different groups of people who have had long-term thinking, and their idea of it is safeguarding nature, working together with nature and thinking about future generations. There are so many examples of this that don't have to come from European kind of thought. So I just, you know, we didn't need longtermism, and now we have it. And now we're wasting our time trying to get rid of it.
Émile P. Torres: Let me just add real fast: now that I have finished this big book on the history of thinking about human extinction in the West, because basically, I was part of this TESCREAL bundle and I was like, oh, what's the history? So that's what the book ended up being. Now that I've done that, I'm just more convinced than ever that the Western approach to thinking about these issues is impoverished and flawed in certain ways that haven't really even been properly identified, articulated and so on. And so for me, that book project is an inflection point where I am just so unconvinced by the whole Western view and feel like it's just problematic. Most of my work at this point is, like, trying to understand things from indigenous perspectives and, you know, the perspective that-
Timnit Gebru: How did you get out of longtermism? I know that's probably a conversation for another day, but I'm so curious. I think, with all of our collaborations, I never asked that question, like, how did you get into it? And how did you get out of it? And maybe we can answer an audience question after that. But if you have a short spiel about that, because I think that would be helpful in trying to figure out how to get people out of it.
Émile P. Torres: Yeah, I mean, there are really three issues. So I'll go over them in, admittedly, insufficient detail. One is the most embarrassing, which is that I started to read and listen to philosophers and historians, and so on, scholars in general, who weren't white men. It was just like, wow, okay, there's this whole other perspective, this whole other paradigm, this whole other way of thinking about these issues, and it rendered the techno-utopian vision at the heart of the TESCREAL bundle, which I was somewhat enthusiastic about, just patently impoverished. And so that was one of the issues. The other was just sort of studying population ethics and realizing the philosophical arguments that underlie longtermism are not nearly as strong as one might hope, especially if longtermists are going out and shaping UN policy and the decisions of tech billionaires. And the other one was just reading about the history of utopian movements that became violent, and realizing that, okay, a lot of these movements combined two elements: a utopian vision of the future, and a kind of broadly utilitarian mode of moral reasoning. When you put those together, then if the ends justify the means and if the ends are utopia, then what means are off the table? So that was the other sort of epiphany I had, like, wow, longtermism could actually be really dangerous. It could recapitulate the same kind of violence, extreme actions, that we witnessed throughout the 20th century with a lot of utopian movements.
Timnit Gebru: And they explicitly say that some of those tragedies are just a blip, right? They're not as bad as, like, the tragedy of not having the utopia that they think we are all destined to have.
Émile P. Torres: This is the galaxy brain part: when you take a truly cosmic perspective on things, even the worst atrocities or the worst disasters of the 20th century, World War II, the 1918 Spanish flu and so on, those are just "mere ripples on the surface of the great sea of life," to quote Nick Bostrom. So there's a kind of, from this grand cosmic perspective, it kind of inclines people to adopt this view, to minimize or trivialize anything that is sub-existential, anything that doesn't directly threaten-
Timnit Gebru: There's a good question here, and there are multiple of them. Sharon has two questions, which I'll lump into one. One is about the degree to which ethics is being emphasized and factored into the data collection and cleaning processes required by machine learning systems. And, you know, there's a vast underclass that has emerged, tasked with feeding data into these systems. What are your thoughts on this? And how does it play into your own research or work? Well, for me, personally, my institute has worked on the exploited workers behind AI systems. And so what's really interesting is, while you have the TESCREAL organizations like OpenAI talking, and you can just go read what Sam Altman writes and what Ilya and the rest of them write, and, you know, while they're talking about how utopia is around the corner, and how they have announced this huge AGI alignment group and they're gonna save us, they're simultaneously disenfranchising a lot of people. They have a bunch of people that they have hired. Karen Hao just had a great article recently in The Wall Street Journal about the Kenyan workers who were filtering out the outputs of ChatGPT. And one of them was saying how just five months of working on this, the mental state that he was in afterwards made him lose his entire family. Just five months, right? So that's what's going on. So as they're talking about how AGI is around the corner, and how they're about to create this superintelligent being that needs to be regulated because it's more powerful than everything we've ever thought of, they're very intentionally obfuscating the actual present-day harm that they are causing by stealing people's data, like creatives' data. And it makes total sense to me that they're thinking about just automating away human artists, right? Because that's just, like, the non-good part about being human for them. That part that they want to transcend. But also it helps them make a lot of money. So they're stealing data. They're exploiting a lot of workers and traumatizing them in this process. However, if you take this cosmic view, like Émile was saying, these are just blips on the way to utopia, so it's fine. It's okay for them to do this on the way to the utopia that we're all going to have if we get the AGI that's going to save humanity.
Émile P. Torres: Yeah, so basically, I think longtermists would say that, okay, some of these things are bad. But again, there's an ambiguity there. They're bad in an absolute sense. But relatively speaking, they really just aren't that significant. I mean, the 1918 Spanish flu killed, like, just millions and millions of people. And that is just a mere ripple. It's just a tiny little blip in the grand scheme. So all of the harms now, it's not that we should completely dismiss them. But don't worry too much about them, because there are much bigger fish to fry, like getting utopia right. By ensuring that the AGI we create is properly value-aligned, that it does what we say. So, for example, when we say cure aging, it does that; it takes about a minute to think about it, and it cures aging. Colonize space. You know, that's what matters just so much more, because there are astronomical amounts of value in the future. And the loss of that value is a much greater tragedy than whatever harms could possibly happen right now to current people.
Timnit Gebru: So Michael asks, if pausing AI research is something we should be skeptical of, what sorts of policies should we support to prevent immediate harms posed by AI systems? Thats a great question, because when we saw this pause AI letter, we had to come up with a response. So Ill link to it. But in our response, we said that we need to consider things like how the information ecosystem is being polluted by synthetic text coming out of things like large language models. We need to consider labor and whats happening to all the exploited workers and all the people, these companies are trying to devalue their laborers and displace them. We need to think about all of the harmful ways in which AI is being used, whether it is at the border to disenfranchise refugees, or you know, bias and face people being falsely accused of crimes based on being misidentified by face recognition, etc. So, first, I think we need to address the labor issue and the data issue. So, this is what they do, right? When theyre talking about this large cosmic, whatever galaxy thing, you think that there isnt mundane day-to-day stuff that theyre doing that we can, like a normal corporation is doing, that can be regulated by normal agencies that have jurisdiction. So we can make sure that we can analyze the data that theyre using to train these systems and make sure that they have to be transparent about it, as in, you know, prove to us that youre not using peoples stolen data. For instance, make it opt in, not opt out. And also make it difficult for them to exploit labor like they are. Thats just one example. But I will, just to be brief, I will post our one-page response to that pausing AI letter on the chat and so maybe you can see it in the comments or something like that.
Émile P. Torres: So we're more or less out of time. But one of the harms that also doesn't get enough attention is, on the one hand, the release of ChatGPT, just releasing it into society, sort of upended systems within a lot of universities, because suddenly students were able to cheat. And it was really difficult to... I knew multiple professors who had students who turned in papers that were actually authored by ChatGPT. But the flip side of that is that there are also some students who have been accused of plagiarizing, meaning using ChatGPT, who actually didn't. And Timnit, you were just tweeting the other day about a student.
Timnit Gebru: And this cosmic view that we're talking about kind of allows these companies to deceive people about their capabilities. So for example, if OpenAI makes you believe that they've created this superintelligent thing, then you're going to think that, and then you're going to use it and many of their systems. Similarly, if they deceive you into thinking that they've created a detector that detects, with high accuracy, whether an output is from ChatGPT or not, you're going to use it. So what's happening is that people have been using these kinds of systems to falsely accuse students of not creating original work. So OpenAI quietly deprecated their detector, and it's really interesting how loud they are about the supposed capabilities of their systems and how quiet they were about, you know, removing this detector. So, I think, for me, my message to people would be: don't buy into this superintelligence hype. Keep your eye on the present-day dangers of these systems, which are based on very old ideas of imperialism, colonization, centralization of power and maximizing profit, not on safeguarding human welfare. And that's not a futuristic problem; that's an old problem that still exists today.
Émile P. Torres: So that ties into... maybe we'll take one last question. We're so sorry to everybody who asked a question we didn't get to. Genuine apologies for that. Okay, so: for those who may not have the tech background, what conversations do you think must happen from below, especially as this targets marginalized communities, the Global South, class, and so on? Timnit, your thoughts on that? I can say a few things, but-
Timnit Gebru: I can say something shortly and then Im curious to hear your thoughts. Well, you know, to know the harms, you dont have to have a tech background. So thats a good thing to remember, right? When something is harmful, you dont have to know the ins and outs of how it works. And often the people who do know the issues are people with lived experience of either being algorithmically surveilled or losing their jobs or of being accused of something they didnt do. The student who emailed us, who was falsely accused of writing an essay, I mean, plagiarizing an essay, didnt know anything about how it worked to know that this was an injustice. So I think thats the first point. The first point is people need to know that they need to be part of the conversation and they dont need to know how it works. Theres a concerted effort to mislead you, as to the capabilities of current AI systems. The second point to me is that we should be very skeptical of companies that claim to be building something all-knowing and complain and say Oh my God, this all-knowing thing needs to be regulated, and then complain when it is. Thats what OpenAI did. They went to the U.S. Congress, and said that there needs to be regulation, and theyre scared. Then the EU regulates it and theyre like, Oh, we might have to pull out of the EU. So just think of it as entities using certain systems, and whether those entities are doing the right thing and those systems are harmful or not, theres really nothing new about this new set of technologies that can be used to disenfranchise people. As much as possible, I highly recommend people if they are in tech or were thinking about policy investing in small local organizations that dont have to depend on these large multinational corporations. And thinking about how the fuel for this exploitation is data and labor. So thinking about where that comes from, how people can be adequately compensated for that, and for peoples data and not to be taken without their consent.
Émile P. Torres: The only thing I would add to that, tying this back to the central thrust of this whole conversation, is just, I think, being aware of some of the ideologies that are shaping the rhetoric, the goals and so on of these leading AI companies. And sort of fitting the pieces together and understanding why it is that there's this race to create AGI. Again, you know, these ideologies that fit within the TESCREAL bundle, if not for the fact that they are immensely influential, that they are shaping some of the most powerful individuals in the tech world, from Elon Musk to Sam Altman, and so on, if it weren't for that fact, then perhaps a lot of this would be a reason to chuckle. But I mean, it is hugely influential. So I think the first step in figuring out a good way to combat the rise of these ideologies is at least just understanding what they are, how they fit together and the ways in which they're shaping the world we live in today.
Timnit Gebru: I was gonna say, Nitasha Tiku has a great article that just came out in The Washington Post that details the amount of money that's going into this kind of AI doomerism on the Stanford campus from the effective altruists. So this is just one angle, but I think it's good to know how much money and influence is going into this.
Émile P. Torres: Alright, so thanks for, thanks for having us. I think Zuade might come back in a moment.
Zuade Kaufman: I just wanted to thank you. That was just so intriguing and important. And thank you for all your work and for being part of Truthdig.
Émile P. Torres: Thanks for hosting us.
Zuade Kaufman: Yeah, just keep the information rolling. And I know you also provided some links in the chat that we'll share with our readership, whatever further readings you think they should be doing, and, of course, buying your book and continuing. Thank you so much.
- OpenAI disbands team devoted to artificial intelligence risks - Yahoo! Voices - May 18th, 2024 [May 18th, 2024]
- OpenAI disbands safety team focused on risk of artificial intelligence causing 'human extinction' - New York Post - May 18th, 2024 [May 18th, 2024]
- OpenAI disbands team devoted to artificial intelligence risks - Port Lavaca Wave - May 18th, 2024 [May 18th, 2024]
- OpenAI disbands team devoted to artificial intelligence risks - Moore County News Press - May 18th, 2024 [May 18th, 2024]
- Generative AI Is Totally Shameless. I Want to Be It - WIRED - May 18th, 2024 [May 18th, 2024]
- OpenAI researcher resigns, claiming safety has taken a backseat to shiny products - The Verge - May 18th, 2024 [May 18th, 2024]
- Most of Surveyed Americans Do Not Want Super Intelligent AI - 80.lv - May 18th, 2024 [May 18th, 2024]
- A former OpenAI leader says safety has 'taken a backseat to shiny products' at the AI company - Winnipeg Free Press - May 18th, 2024 [May 18th, 2024]
- DeepMind CEO says Google to spend more than $100B on AGI despite hype - Cointelegraph - April 20th, 2024 [April 20th, 2024]
- Congressional panel outlines five guardrails for AI use in House - FedScoop - April 20th, 2024 [April 20th, 2024]
- The Potential and Perils of Advanced Artificial General Intelligence - elblog.pl - April 20th, 2024 [April 20th, 2024]
- DeepMind Head: Google AI Spending Could Exceed $100 Billion - PYMNTS.com - April 20th, 2024 [April 20th, 2024]
- Say hi to Tong Tong, world's first AGI child-image figure - ecns - April 20th, 2024 [April 20th, 2024]
- Silicon Scholars: AI and The Muslim Ummah - IslamiCity - April 20th, 2024 [April 20th, 2024]
- AI stocks aren't like the dot-com bubble. Here's why - Quartz - April 20th, 2024 [April 20th, 2024]
- AI vs. AGI: The Race for Performance, Battling the Cost? for NASDAQ:GOOG by Moshkelgosha - TradingView - April 20th, 2024 [April 20th, 2024]
- We've Been Here Before: AI Promised Humanlike Machines In 1958 - The Good Men Project - April 20th, 2024 [April 20th, 2024]
- Google will spend more than $100 billion on AI, exec says - Quartz - April 20th, 2024 [April 20th, 2024]
- Tech companies want to build artificial general intelligence. But who decides when AGI is attained? - The Bakersfield Californian - April 8th, 2024 [April 8th, 2024]
- Tech companies want to build artificial general intelligence. But who decides when AGI is attained? - The Caledonian-Record - April 8th, 2024 [April 8th, 2024]
- What is AGI and how is it different from AI? - ReadWrite - April 8th, 2024 [April 8th, 2024]
- Artificial intelligence in healthcare: defining the most common terms - HealthITAnalytics.com - April 8th, 2024 [April 8th, 2024]
- We're Focusing on the Wrong Kind of AI Apocalypse - TIME - April 8th, 2024 [April 8th, 2024]
- Xi Jinping's vision in supporting the artificial intelligence at home and abroad - Modern Diplomacy - April 8th, 2024 [April 8th, 2024]
- As 'The Matrix' turns 25, the chilling artificial intelligence (AI) projection at its core isn't as outlandish as it once seemed - TechRadar - April 8th, 2024 [April 8th, 2024]
- AI & robotics briefing: Why superintelligent AI won't sneak up on us - Nature.com - January 10th, 2024 [January 10th, 2024]
- Get Ready for the Great AI Disappointment - WIRED - January 10th, 2024 [January 10th, 2024]
- Part 3 Capitalism in the Age of Artificial General Intelligence (AGI) - Medium - January 10th, 2024 [January 10th, 2024]
- Artificial General Intelligence (AGI): what it is and why its discovery can change the world - Medium - January 10th, 2024 [January 10th, 2024]
- Exploring the Path to Artificial General Intelligence - Medriva - January 10th, 2024 [January 10th, 2024]
- The Acceleration Towards Artificial General Intelligence (AGI) and Its Implications - Medriva - January 10th, 2024 [January 10th, 2024]
- OpenAI Warns: "AGI Is Coming" - Do we have a reason to worry? - Medium - January 10th, 2024 [January 10th, 2024]
- The fight over ethics intensifies as artificial intelligence quickly changes the world - 9 & 10 News - January 10th, 2024 [January 10th, 2024]
- AI as the Third Window into Humanity: Understanding Human Behavior and Emotions - Medriva - January 10th, 2024 [January 10th, 2024]
- Artificial General Intelligence (AGI) in Radiation Oncology: Transformative Technology - Medriva - January 10th, 2024 [January 10th, 2024]
- Exploring the Potential of AGI: Opportunities and Challenges - Medium - January 10th, 2024 [January 10th, 2024]
- Full-Spectrum Cognitive Development Incorporating AI for Evolution and Collective Intelligence - Medriva - January 10th, 2024 [January 10th, 2024]
- Artificial Superintelligence - Understanding a Future Tech that Will Change the World! - MobileAppDaily - January 10th, 2024 [January 10th, 2024]
- Title: AI Unveiled: Exploring the Realm of Artificial Intelligence - Medium - January 10th, 2024 [January 10th, 2024]
- The Simple Reason Why AGI (Artificial General Intelligence) Is Not ... - Medium - December 2nd, 2023 [December 2nd, 2023]