Archive for the ‘Artificial General Intelligence’ Category

What’s Behind the Race to Create Artificial General Intelligence? – Truthdig

Zuade Kaufman: Hello, I'm Zuade Kaufman, publisher of Truthdig.

As you know, at Truthdig, we dig beneath the headlines to find thought-provoking, forward-thinking ideas and conversations.

We are thrilled today to present two influential and brilliant thinkers,

Dr. Émile P. Torres and Dr. Timnit Gebru, who will be discussing the timely and important question, "What's Behind the Race to Create Artificial General Intelligence?" which is also the title of today's event.

During this discussion, they will provide an overview of the bundle of ideologies known as TESCREAL, an acronym they coined while Émile was curating the Dig titled "Eugenics in the Twenty-First Century: New Names, Old Ideas," which can be found on the Truthdig website.

TESCREAL refers to a set of ideologies that increasingly influence public perceptions and policy debates related to AGI. Émile and Timnit will examine the intellectual underpinnings of these ideologies, which purport to answer the question: Is AGI a transformative technology that will usher in a new age of abundance and prosperity, or will it pose dire threats to humanity?

And now for the introductions.

Dr. Émile P. Torres is a philosopher and historian whose work has focused on global catastrophic risks and human extinction. They have published widely on a range of topics, including religious end-time narratives, climate change and emerging technologies. They are the author of the book "Human Extinction: A History of the Science and Ethics of Annihilation," which was published this year.

We are also going to hear from Dr. Timnit Gebru, a computer scientist whose work focuses on algorithmic bias and data mining. As an advocate for diversity in technology, Timnit co-founded Black in AI. She also founded DAIR, a community-rooted institute that was created to counter Big Tech's pervasive influence on the research, development and deployment of AI. In 2022, Timnit was one of Time magazine's 100 Most Influential People. She continues to be a pioneering and cautionary voice regarding ethics in AGI.

To our audience, please feel free to write your questions during their discussion, wherever you're watching. And there will be a Q&A at the end.

Thank you for participating in this event. I'll hand it over to you, Émile and Timnit.

Émile P. Torres: Thanks so much. So I missed some of the intro due to a technical issue on my side, so maybe I'll repeat some of what you said now. Basically we'll be talking about this acronym that, you know, has been central to the Dig project that I've participated in at Truthdig, but also, it really came out of a collaboration that I was engaged in with Timnit. So I think there isn't any particular rigid structure to this conversation, but I figured we could just go over kind of the basics of the acronym, of this concept, why it's important, what its relation is to artificial general intelligence and this race right now to the bottom, as it were, trying to build these ever-larger language models. And then, as mentioned, we'll take questions at the end. So I hope people find this to be informative and interesting. So yeah, Timnit, is there anything you'd like to add? Otherwise, we can sort of jump right into what the acronym stands for and go from there.

Timnit Gebru: Yeah, lets jump in.

Émile P. Torres: Okay, great. So this concept came out of, as I mentioned, this collaboration; basically, Timnit and I were writing this paper on the influence of a constellation of ideologies within the field of AI. In writing this paper, discussing some of the key figures who played a major role in shaping the contemporary field of AI, including or resulting in this kind of race to create artificial general intelligence, or AGI, we found that it was sort of unmanageable, because there was this cluster of different ideologies that are overlapping and interconnected in all sorts of ways. Listing them all, after the names of some of these individuals who have been influential, was just too much. So the acronym was proposed to sort of economize and streamline the discussion, so that we could ultimately get at the crux of the issue: that there is this, you know, bundle of ideologies that's overlapping and interrelated in various ways. Many of these ideologies came out of previous ideologies and share certain key features. So the acronym stands for (it's a mouthful) Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective Altruism and Longtermism. The way I've come to conceptualize this bundle is that Transhumanism is sort of the backbone. If that's the case, Longtermism is sort of this galaxy brain atop the bundle, because it binds together some of the major themes of the other ideologies into, like, kind of a single, pretty comprehensive normative futurology, or sort of worldview, about what the future can and ought to look like. So that was the impetus behind this bundle. So for example, you know, we were writing about an Oxford philosopher and neo-eugenicist named Nick Bostrom. We've mentioned that he is a transhumanist, who participated in the Extropian movement in the 1990s, anticipates the singularity, and is close colleagues with the leading modern Cosmist, Ben Goertzel. He's hugely influential and has very close ties to the Rationalist and Effective Altruism communities. In fact, his institute, the Future of Humanity Institute, shared office space for a long time (it might still, I'm not sure), but he's for many years shared office space with the Center for Effective Altruism, which is sort of the main EA organization. And then Bostrom also is one of the founders of this Longtermist ideology. So that gives you a sense of, like, okay, you're listing this one name, you know, and connecting him to all of these different ideologies. Doing that throughout the paper with other names and so on is just unmanageable. So hence the acronym was born.

Timnit Gebru: I just want to say that my interest was primarily in, you know, the eugenics angle of the whole AGI movement. So, when I approached you about writing a paper, it was like, "Okay, let's talk about how eugenics thought is influencing this AGI movement, starting from why they want to create AGI to what they envision it will do." So yeah, it just kept on being like, "Before we get to the point, we have to recall, as we say in section two, that Nick Bostrom did this thing and was also part of this other institute, which is also investing in this thing." And it was just kind of impossible to get to the point that we were making. But I was also very surprised, and I don't know if this was your experience; of course, I can see the link to eugenics, because I've been around the Effective Altruists in the longtermist movement and the way they talk about how, you know, we have to work on AI to save humanity and all that, and I was very irritated by it for a long time. However, it's when we were working on this paper that I realized that the link is direct; it's not this roundabout kind of subtle thing. It's a direct link to eugenics. And that was very surprising to me.

Émile P. Torres: Yeah. So, maybe we can elaborate on that just a bit. Because, you know, this backbone of the bundle, transhumanism, I mean, that is uncontroversially considered to be a version of eugenics. It's called so-called liberal eugenics, which is supposed to contrast with the old authoritarian eugenics of the 20th century. Although I think there are pretty good arguments for why, in practice, a liberal eugenics program would ultimately be very illiberal and, you know, restrict freedom. So that's another topic perhaps we could go into. But yeah, I agree. I mean, transhumanism itself was developed by 20th-century eugenicists. So you could distinguish between the first wave and the second wave of eugenics. The main difference between those two is the methodology. So first-wave eugenics was about trying to control population-level reproductive patterns. So if you get individuals with so-called desirable attributes to have more children, and individuals with what are deemed to be undesirable properties to have fewer children, then over many generations, this is a transgenerational process, you can change the frequency of certain traits within the population. So maybe the relevant trait is, like, you know, intelligence, whatever that means exactly. Second-wave eugenics, that was really a response to the development of certain emerging technologies, in particular genetic engineering in the 1970s. But by the 1980s, there was plenty of talk of the possibility of nanotechnology radically enhancing us, modifying our bodies as well. And of course, AI is a big part of that as well. So that's the defining feature of the second wave of eugenics. Transhumanism, then, was developed by these first-wave eugenicists; it basically is this idea that rather than just perfecting the human stock and preventing the degeneration of humanity, or certain groups of humanity, why not just, you know, transcend humanity as a whole? If we can create, you know, the most excellent, the best version of humanity possible through selective breeding, or maybe through emerging technologies, so-called person-engineering technologies, why stop there? Why not try to create this sort of, like, superior post-human species? So that idea goes back to the, like, early 20th century. And then really it merged with the second-wave methodology in the second half of the 20th century; in particular, the late 1980s and early 1990s is when modern transhumanism emerged. So all of this is to say, you're exactly right that the connection between this TESCREAL bundle, via transhumanism, and eugenics is quite direct.

Timnit Gebru: Right. But what I was saying was also that the link to the origins of the drive to create AGI is direct. You know, when we were looking into the TESCREAL bundle, for me, I didn't know what Cosmism was until we were reading the first book on AGI, which was written in, what, 2007, by Ben Goertzel and his collaborator. And then I was like, "Oh, I've heard about this guy," but he wasn't super influential in my space, right? So I haven't really had to look into him or think about him very much. And then I started reading about his Cosmist Manifesto and all of this stuff, right? And then it's like, wow, okay, so this link is direct. He really wants to create AGI because he wants to create post-humans that are not even human. They called it transhuman AGI. So to me, that was surprising. There have always been eugenicist undertones in artificial intelligence in general, and people have written about that. California, obviously, you know, has had many... it's like the mecca of eugenics in the 20th century, and many people have written about different angles of this, starting from John McCarthy and some of the people who coined the term AI. But, you know, I still hadn't seen that direct link. And so, you know, I'm not... you have written so much about some of these people, and you were in one of the movements, you were a longtermist yourself, and so you've been writing about their writings and their books. Unlike you, that has not been my profession. I'm a technologist; I'm just trying to work on building these things, and so I only read these things when I absolutely have to. I only read whatever Ben Goertzel is writing about paradise engineering in the universe or whatever when I absolutely have to. So working on this paper and seeing these direct links, it was very sad, actually, for me, I would say.

Émile P. Torres: Yeah. I mean, so, you know, I was in the longtermist movement, as you mentioned, for many years. The word longtermism was coined in 2017. But before the word was out there, there were people who worked on existential risk mitigation particularly, as well as on understanding the nature and number and so on of different existential risks out there. So there were, sort of, longtermists before the word existed. I was part of that community. But also, the overlap between the longtermist community and the transhumanist movement is pretty significant, which is consistent with this notion that the bundle is kind of a cohesive entity that extends from the late 1980s all the way up to the present. So yeah, I was very much immersed in this movement, this community and these ideas. I have to say, though, one thing that was surprising and upsetting for me, having been in this community but not really having explored every little nook and cranny of it, and maybe also just being a bit oblivious, is the extent to which a lot of the attitudes that animated the worst aspects of first-wave eugenics were present throughout this community. Once you start looking for instances of these discriminatory attitudes, racism, ableism, sexism, xenophobia, classism and so on, they sort of pop up everywhere. So that was one surprising thing for me when we started working on the project. Ultimately, the first article that I wrote for the Dig was just kind of cataloging some of the more egregious and shocking instances of kind of unacceptable views. For example, a number of leading longtermists have approvingly cited the work of Charles Murray, you know, who is a noted racist.

Timnit Gebru: And the Effective Altruists as a whole, even the ones who are not necessarily Longtermists.

Émile P. Torres: Yeah, yeah, absolutely. I mean, I mentioned in one of my articles that Peter Singer published this book in the 1980s, called "Should the Baby Live?", which basically endorsed the use of infanticide for individuals, you know, babies who have some kind of disability. So, yes, these ideas are sort of omnipresent, and once you start looking for them, they show up everywhere within the neighborhood of the TESCREAL bundle, including EA. And so that was something that was kind of surprising to me and disheartening as well.

Timnit Gebru: I think the first time I remember my brush with... maybe I think it would be good to give people like a two-minute overview of the TESCREAL bundle, but I will just say, with Effective Altruism, I remember more than 10 years ago or something like that, somebody describing the idea to me, and just from the get-go, when I heard what they're saying, "We're going to use data to figure out how to give our money in the most efficient way possible," something about that just rubbed me the wrong way already, because it reminds me of a lot of different things. It's making things abstract, right? You're not really connecting at a human level with the people around you or your community; you're in the abstract, trying to think about the, you know, global something. So that was that. And then I was like, okay, but I didn't have to be around this group that much. Then I remember talking to someone who told me that they were at the Effective Altruism conference. They said their keynote speaker was Peter Thiel. I was like, okay, Effective Altruism, Peter Thiel. Then this person explained to me how Peter Thiel was talking about how, to save the world, people have to work on artificial intelligence. That is the number one thing you need to be working on. This was more than 10 years ago. And I could not believe it. And then the person went ahead to explain to me why. Well, you know, even if there was a 0.000000-whatever-1 chance of us creating something that is superintelligent, and even if there's a really tiny chance of that superintelligent thing wanting to extinguish us, the most important thing to do is to make sure that that is stopped, because there will be so many people in the future. So this person said that to me back then, right, and at that time I wasn't looking at it, I didn't know what longtermism was or anything. I just had this association with Effective Altruism, and I was like, "This is ridiculous, you gotta be kidding me." But what was different back then versus now is that this type of thinking was not driving the, basically, the most popular and pervasive versions of artificial intelligence, the field or the systems. People doing this were fringe. And even when people like Elon Musk at that time were talking about how AI can be the devil or invoke the devil and things like that, many people in the field were, like, laughing at them. So it wasn't a situation where you had to work in the field and really just either buy into it because that's where the money comes from, or interact with them too much. It was the kind of thing where you could avoid them. But in the last few years, it became not only impossible to avoid them, but they have been at the forefront of all of the funding and all of the creation and proliferation of these huge companies; Anthropic is one that got hundreds of millions of dollars from Effective Altruism. And so that's why, for me, I wanted to kind of make a statement about it and collaborate with you to work on this. Because I kind of feel like they're actually preventing me from doing my job in general. But I think, yeah, before we jump into it, maybe it's good to, maybe you can explain a little bit, like, what TESCREAL stands for, right? We've gone through transhumanism, but then there's a number of others. Actually, we might have to include the new e/acc thing there too.
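To make the shape of that expected-value reasoning concrete, here is a minimal sketch of the arithmetic being described; every number in it is an illustrative assumption, not a figure from the discussion.

```python
# Illustrative sketch of the expected-value reasoning described above.
# Every number here is an assumption chosen only to show the shape of the argument.

p_save_future = 1e-10          # "a 0.000000...1 chance" that AI work averts extinction
future_people = 1e58           # an astronomically large estimate of possible future people
present_people_helped = 1.3e9  # people who could be helped with certainty today

expected_future_lives = p_save_future * future_people   # 1e48
expected_present_lives = 1.0 * present_people_helped    # 1.3e9

# The speculative far-future term dwarfs the certain present-day term,
# which is the step of the argument the speakers are criticizing.
print(expected_future_lives > expected_present_lives)  # True
```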

Émile P. Torres: Yeah. Maybe the acronym needs to get even clunkier to incorporate this new AI accelerationist movement.

Timnit Gebru: Yeah.

Émile P. Torres: So yeah, very briefly, within this kind of TESCREAL movement, this community, there are two schools of thought. They differ primarily not in terms of the particular techno-utopian vision of the future; in both cases, they imagine us becoming digital, eventually colonizing space, radically augmenting our intellectual abilities and so on, becoming immortal. But they differ on their probability estimates that AGI is going to kill everybody. So you've got accelerationists, who think that the probability is low; in general, there's some nuances to add there. But then there are Doomers, AI Doomers. Eliezer Yudkowsky is maybe the best example.

Timnit Gebru: Didn't he think that the singularity was coming in 2023?

Émile P. Torres: That was a long time ago. I think in the early 2000s his views shifted. He got a bit more anxious about the singularity: maybe the singularity is not going to inevitably result in this kind of wonderful, paradisiacal world in the future, but actually could destroy humanity. But anyway, so yeah, the TESCREAL bundle is Transhumanism, this notion that we should use technology to radically enhance the human organism. The second letter is Extropianism. This was the first organized transhumanist movement, which really emerged most significantly in the early 1990s and was associated with something called the Extropy Institute, founded by a guy named Max More. And then Singularitarianism, this is also kind of just a version of transhumanism that puts special emphasis on the singularity, which has a couple different definitions, but the most influential has to do with this notion of an intelligence explosion. So once we create an AI system that is sufficiently intelligent, it will begin this process of recursive self-improvement. And then very quickly, you go from having a human-level AI to having a vastly superintelligent entity that just towers over us to the extent that we tower over the cockroach, something like that. So that's singularitarianism. And then Cosmism is kind of, you know, transhumanism on steroids. In a certain sense, it's about not just radically modifying ourselves, but eventually colonizing space and engaging in things like space-time engineering. So this is just like manipulating the universe at the most fundamental level to make the universe into what we want it to be. So that's the heart of Cosmism. It has a long history going back to the Russian Cosmists in the late 19th century, but we're really focused on the modern form that came out of what was articulated by Ben Goertzel, the individual who christened the term AGI in 2007. So then Rationalism is, like, basically, if we're going to create this techno-utopian world, that means that a lot of smart, quote unquote, people are going to have to do a lot of smart things. So maybe it's good to take a step back and try to figure out how to optimize our smartness, or rationality. So that is really the heart of rationalism. How can we be maximally-

Timnit Gebru: Take emotions out of it, they say, although they're some of the most emotional people I've talked to.

Émile P. Torres: Yeah, yeah. I mean, there's-

Timnit Gebru: They're like robots. To me, Rationalism feels like, let's act like robots, because it's better. Any human trait that is not like a robot is bad. So let's figure out how to communicate like robots. Let's figure out how to present our decision-making process like that of a computer program or something. That's how it feels to me, which then makes sense of, you know, how cultural workers are currently being treated. Like how artists and other kinds of cultural workers are being treated by this group of people.

Émile P. Torres: Yeah, so I think from the Rationalist view, emotions are sort of the enemy. I mean, they're something that's going to distort clear thinking. So, an example that I often bring up, because I feel like it really encapsulates the sort of alienated, or you might say robotic, way of thinking, is this LessWrong post from a bit more than a decade ago from Eliezer Yudkowsky, in which he asked: if you're in a forced-choice situation and you have to pick between these two options, which do you choose? One is that a single individual is tortured relentlessly and horrifically for 50 years. The other is that some enormous, unfathomable number of individuals have an almost imperceptible discomfort of an eyelash in their eye. Well, if you crunch the numbers, and you really are rational, and you're not letting your emotions get in the way, then you'll say that the eyelash scenario is worse. So if you have to choose between the two, pick the individual being tortured for 50 years. That is a better scenario than all of these individuals who just go, "Oh!"
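For readers who want to see the aggregation being described, here is a minimal sketch of the arithmetic behind that forced choice; the disutility weights and the population size are made-up illustrations, not values from the original post.

```python
# Toy aggregation behind the "torture vs. eyelash" forced choice described above.
# The disutility weights and the population size are illustrative assumptions.

torture_disutility = 1_000_000   # suffering assigned to one person tortured for 50 years
speck_disutility = 1e-6          # suffering assigned to a momentary eyelash irritation
people_with_specks = 10**30      # "some enormous, unfathomable number" of individuals

total_torture = 1 * torture_disutility                  # 1.0e6
total_specks = people_with_specks * speck_disutility    # 1.0e24

# Pure aggregation says the eyelash scenario is worse, so the "rational" pick
# is the single person tortured for 50 years.
print(total_specks > total_torture)  # True
```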

Timnit Gebru: The through line is the transhumanism; it's like the TESC part. And then the REAL part does not, I guess, well, the longtermists seem very much like transhumanists, but the REAL part does not have to be transhumanist. However, this utilitarian maximizing of some sort of utility thing, I think, exists across all of them.

Émile P. Torres: Yeah, a lot of the early transhumanists were sympathetic with utilitarianism. I mean, you don't have to be a utilitarian to be a transhumanist, just like you don't have to be a utilitarian to be an effective altruist, or even a longtermist. But as a matter of fact, utilitarianism has been hugely influential, even among the transhumanists. I mean, a lot of them are consequentialists. Nick Bostrom, in one of his early papers, his first paper on existential risk, defined it in terms of transhumanism. Then a year later, he basically expanded the definition of existential risk to incorporate explicit utilitarian considerations. So that gives you a sense of how closely bound up, historically, these ideas have been. So you're totally right: utilitarianism, this notion of maximizing value, whatever it is we value, if it's happiness, if it's jazz concerts, the more the better. You want to multiply it as much as possible. So, yeah, unless you have anything else to add, to help me continue with-

Timnit Gebru: Yeah, I think we're at the EAL part.

Émile P. Torres: Yeah, so the EAL part. Effective Altruism, basically, one way to think of it is it's kind of what happens when rationalists, rather than focusing just on rationality, pivot to focusing on morality. So the rationalists are trying to optimize their rationality; the effective altruists are trying to optimize their morality. I think there are ways of describing Effective Altruism that can be somewhat appealing. They want to do the most good possible. You look at the details, and it turns out that there's all sorts of problems and deeply unpalatable-

Timnit Gebru: 20th-century eugenicists also wanted to do the most good possible, right? That's how everybody kind of describes it. Everybody in this movement describes themselves as wanting to save humanity, wanting to do the most good possible. Like, nobody's coming and saying, "We want to be the most evil possible."

Émile P. Torres: Yeah, I mean, there are many in the community who literally use the phrase "saving humanity." "What we're doing is saving humanity." So there's, a kind of, I mean, as a matter of fact, there is a kind of grandiosity to it, a kind of Messianism. We are the individuals who are going to save humanity, perhaps by designing artificial superintelligence that leads to utopia rather than completely annihilating humanity. So I mean, this is back when I was-

Timnit Gebru: Counteracting against the opposite one, right? We are the ones who are going to save humanity by designing the AGI god that's going to save our humanity. Also, we're the ones who should guard against the opposite scenario, which is an AGI gone wrong, killing every single human possible. We are the ones who need to be the guardians. In both cases, this is the attitude of the bundle.

Émile P. Torres: Yeah. That leads quite naturally to Longtermism, which is basically just what happens if you're an EA. Again, EA is hugely influenced by rationalism. But if you're an EA, and you start reading about some of the results from modern cosmology. How big is the universe? How long will the universe remain habitable? And once you register these huge numbers, all the billions, hundreds of billions of stars out there in the accessible universe and the enormous amount of time that we could continue to exist, then you can begin to estimate how many future people there could be. And that number is huge. So, like, one estimate is that within the accessible universe, there are 10 to the 58 future people, so a one followed by 58 zeros. So if the aim, as an Effective Altruist, is to positively influence the greatest number of people possible, and if most people who could exist will exist in the far future, then it's only rational to focus on them rather than current-day people, because there's only 1.3 billion people in multidimensional poverty. That's a lot in absolute terms, but that is a tiny number relative to 10 to the 58. That's supposed to be a conservative estimate. So that's ultimately how you get this longtermist view that the value of the actions we take right now depends almost entirely on the far-future effects, not on the present-day effects. That's the heart of longtermism. And that's why people are so obsessed with AGI, because if we get AGI right, then we get to live forever. We get to colonize space. We get to create enormous numbers of future digital people spread throughout the universe. And in doing that, we maximize value, going back to that fundamental strain at the heart of this TESCREAL movement. We maximize value. So that's ultimately why many longtermists are obsessed with AGI. And again, if we get AGI wrong, that forecloses the realization of all this future value, which is an absolute moral catastrophe.
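A quick way to see why present-day people all but vanish in this accounting is to compare the two figures quoted above; a minimal sketch, using only those two numbers.

```python
# Comparing the two population figures cited above: roughly 1.3 billion people
# in multidimensional poverty today versus an estimated 10^58 possible future people.

present_poor = 1.3e9
possible_future_people = 1e58

fraction = present_poor / possible_future_people
print(f"{fraction:.1e}")  # 1.3e-49

# On this bookkeeping, everyone alive today registers as roughly one part in 10^49
# of the moral ledger, which is how "focus on the far future" gets justified.
```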

Timnit Gebru: I was going to say, it's basically a secular religion that aligns very well with the large corporations that we're seeing right now and the billionaires who are funding this movement, because you're not telling them that they shouldn't be billionaires or that they should just give away their resources right now to people who exist right now. You're telling them that they need to be involved in this endeavor to save humanity from some sort of global catastrophic risk. And therefore, they need to put their intellect and their money to that use, not, you know, to the person that they're disenfranchising, or the person they're exploiting. For instance, you know, Elon Musk had the biggest racial discrimination case in California's history because of what he was doing to his workers. And of course, then he said all sorts of other things. But in this ideology, you're telling him, "No, no, this is a small concern. This is not a big concern. You as a very important and smart person have to be thinking about the far future and making sure that you save all of humanity. Don't worry about this little concern of racial discrimination in your factory." So the reason I became involved in this bundle, or not involved in this bundle, sorry, analyzing this bundle, is because, you know, being in the field of AI and seeing their growing influence, from, you know, the DeepMind days, where now I know the founders of DeepMind, especially Shane Legg, are in this bundle. The other thing to note is that they all go to the same conferences and are in each other's movements. That's why we made it, you know, one acronym. Effective altruists are very much involved in rationalism and rationality, and very much in the other ideologies too. So we see DeepMind being founded. It's one of the most well-known companies whose explicit goal was to create this AGI, this artificial general intelligence, that's going to bring people utopia. Then we see it was funded by billionaires in this bundle, like Elon Musk and Peter Thiel. Then we see Nick Bostrom's Superintelligence coming out, where he warns about both utopia, if we build some superintelligent thing, and apocalypse, if we get it wrong. Then you start having people like Elon Musk going around talking about how we're going to have the devil. Then once Google buys DeepMind, you have them all panicking, saying they need to create their own, basically, DeepMind that is quote, unquote, open. I don't know if OpenAI still has this on their company page, but they were saying that if somebody else achieves beneficial AGI, they will consider their mission complete. How nice of them. Then these people in this bundle come along and they panic; they say they're going to create OpenAI to once again save humanity. And I remember how angry I was when that announcement came out. I wrote a whole letter just to myself about it, because I didn't buy it. It was this Saviorism by this really homogeneous group of people. Then of course, now we have a similar thing going on, which is OpenAI is essentially bought by Microsoft, as far as I'm concerned. And then you have them panicking yet again with the Future of Life Institute, Max Tegmark, each of these people we can say so much about, coming up with this letter saying that we need to pause AI and things like that. It got so much attention. It was signed by people, including Elon Musk, saying we need to pause AI, and then the next day, what happens? Elon Musk announced his xAI thing.
So it's like this cycle that goes on every few years, both utopia and apocalypse, right? "Oh, we're gonna bring utopia. No, there might be an apocalypse. We're gonna break this." It's the same people. Two sides of the same coin. And, you know, I'm only seeing this growing after OpenAI. OpenAI wasn't effective altruist enough for a set of people. They left and founded Anthropic. Anthropic got hundreds of millions of dollars from TESCREAL billionaires, and most of their money came from Sam Bankman-Fried, who got his money, basically, because he was convinced to earn his money by the Center for Effective Altruism, through this "earn to give" thing where you earn as much money as possible and give it away to effective altruist causes. And of course his cause was stopping the AGI apocalypse or bringing the AGI utopia. And so then he gives all this money to Anthropic. And now you have all of these organizations who are incredibly influential, in the mainstream. They are no longer fringe like they were 10 years ago. And that's why we're here today talking about them.

Émile P. Torres: Yeah, maybe I'll just add something real quick to that, which is that, you know, years ago, when I was really active in this community, I remember having conversations with people about how in the heck do we get people in power to pay attention to AI, in particular superintelligence. And it was just such a struggle to convince individuals like, you know, Geoffrey Hinton, for example, Yoshua Bengio, and so on. How do we convince them that superintelligence is either going to result in a techno-utopian world, where we'll live forever, we colonize space, and so on, or it's complete annihilation? So there was a huge struggle, and it's just amazing to witness over the past-

Timnit Gebru: It's unfortunate. Especially with Yoshua, because he was not in that bundle. And I knew him. I had spoken to him for a long time, not as much now. His brother was my manager. And he was not in this whole existential risk thing; then, all of a sudden... you know, we're all trying to figure out what's going on, because his brother has the complete opposite view. He's definitely not in that crew. But Yoshua talked to Max Tegmark and all of a sudden, he's in full-blown Doomer mode. And this is why I think it's a secular religion. I'm trying to understand what it is that makes scientists want to have that. Is it because they want to feel super important? So Cho, Kyunghyun Cho, who used to be Yoshua's postdoc and is very influential in natural language processing and deep learning, recently came out and said, thankfully, that he's very aware that, you know, ideologies like EA are the ones that are driving this whole existential risk and doomer narrative. He said that there are many people in Silicon Valley who feel like they need to save the world. It's only them who can do it. And this is a widespread kind of feeling. I'm glad he spoke up, and I think more researchers like him need to speak up. But it's very unfortunate that back about 10 years ago, people like Yoshua were not taking people like Elon Musk seriously. And Geoff Hinton, I mean, his student, Ilya, is one of the founders of OpenAI and nearly as full-on in this kind of bundle. So I'm not surprised that he said that. But, you know, to give you an example, a sense of how they minimize our current present-day concerns in favor of this abstract representation of the apocalypse that supposedly everybody should be concerned about: Geoff Hinton was asked on CNN about my concerns about language models, because I got fired for a number of my concerns. Meredith Whittaker was pushed out because she was talking about Google's use of AI for the military. He said that my concerns were minuscule compared to his. This is the way they get to dismiss our present-day concerns while actually helping bring them about through their involvement in these various companies that are centralizing power and creating products that marginalize communities.

Émile P. Torres: Yeah. So thanks for that, Timnit. Should we maybe try to answer a few questions? So maybe I'll read one out, but is the most recent question good for you, Timnit?

Timnit Gebru: Yeah, sure.

Émile P. Torres: So okay, I'll read it out. "Question for the speakers: Where do researchers like Geoffrey Hinton fall? I very much agree that people like Elon Musk in OpenAI have been extremely inconsistent."

Timnit Gebru: So I can answer a little bit on that question. Personally, when you look at the way in which we've described the TESCREAL bundle, and the fact that the AGI utopia and apocalypse are two sides of the same coin, to me, Elon Musk has been consistent. Because his position is always, whenever he feels like he cannot control a company that's creating, that's purporting to create AGI, he panics and says, "We're going to have an apocalypse." That's what happened in 2013 or 2014, when DeepMind was acquired by Google. That's what happened when OpenAI was getting tons of money from Microsoft. And that's what happened just now, when he signed and publicized the letter from the Future of Life Institute saying that we need to pause AI. Then the next day, he announces his own thing. This is exactly what he did back in 2015, too. He complained, and then the next day he announced his own thing. So that's what I... I think he's been super consistent. People like Geoff Hinton hadn't been in this bundle, but their students... so what happened is the merger between the deep learning crew, which wasn't necessarily in this bundle, like Yoshua and Geoffrey Hinton and all that, who have been around for decades, and companies like DeepMind and OpenAI. You now have the merger between deep learning and machine learning researchers and people in the TESCREAL bundle. And so what we're seeing with people like Geoff Hinton is that his student, Ilya Sutskever, was a co-founder of OpenAI, and now, you know, he's in that bundle. And so Geoff Hinton is going around and... but if you look at his talks and arguments, it's so sad. A lot of women especially have been talking about how much of what he says in this area makes no sense. So yeah, that is kind of my point of view on the machine learning side.

Émile P. Torres: Alright. So, next question. I'll take one quickly from Peter, who asks, "What do you see as the flaw in the longtermist reasoning? Because most of the philosophical counters to longtermism seem to imply antinatalism." So antinatalism is this view that, there are different versions of it, but one is that it's wrong to have children. Or that birth has a negative value, something of that sort.

Timnit Gebru: Why do we need both extremes? This is what I don't understand.

Émile P. Torres: Yeah, this is exactly what I'm going to say. I mean, first of all, I think the flaws with longtermism, that would be a whole hour-long talk. So maybe I could just direct you to a forthcoming book chapter I have, which is nice and short and to the point, that, I think, provides a novel argument for why the longtermist view is pretty fundamentally problematic. It's called "Consciousness, Colonization and Longtermism." I put it up on my website. The other thing is, antinatalism, this is not the alternative, or the alternatives do not imply antinatalism. I mentioned before, in writing and on podcasts and so on, long-term thinking is not the same as longtermism. You can be an advocate, a passionate advocate, for long-term thinking, as I am, and not be a longtermist. You can not believe that we have this kind of moral obligation to go out, colonize, plunder the cosmos, our so-called cosmic endowment of negative entropy, or negentropy, and then create, you know, the maximum number of people in the future in order to maximize value. That's accepted even on a moderate longtermist view, and that is very radical. And so you can reject that and still say, I really care about future generations. I care about their well-being; hence, I care about climate change, I care about nuclear waste and how that's stored, and so on and so on. So I would take issue with the way that the question itself is couched.

Timnit Gebru: Yeah. And why does it only have to come from Western philosophy, the counter to longtermism, right? There's many different groups of people who have had long-term thinking, and their idea of it is safeguarding nature, working together with nature and thinking about future generations. There's so many examples of this that don't have to come from European kind of thought. So I just, you know, we didn't need longtermism, and now we have it. And now we're wasting our time trying to get rid of it.

Émile P. Torres: Let me just add real fast, now that I have finished this big book on the history of thinking about human extinction in the West, because basically, I was part of this TESCREAL bundle and I was like, oh, what's the history? So that's what the book ended up being. Now that I've done that, I'm just more convinced than ever that the Western approach to thinking about these issues is impoverished and flawed in certain ways that haven't really even been properly identified, articulated and so on. And so for me, that book project is an inflection point where I am just so unconvinced by the whole Western view and feel like it's just problematic. Most of my work at this point is, like, trying to understand things from indigenous perspectives and, you know, the perspective that-

Timnit Gebru: How did you get out of longtermism? I know that's probably a conversation for another day, but I'm so curious. I think, with all of our collaborations, I never asked that question, like, how were you... How did you get into it? And how did you get out of it? And maybe we can answer an audience question after that. But if you have a short spiel about that, because I think that would be helpful in trying to figure out how to get people out of it.

Émile P. Torres: Yeah, I mean, there are really three issues. So I'll go over them in insufficient detail. One is the most embarrassing, which is that I started to read and listen to philosophers and historians, and so on, scholars in general, who weren't white men. So just like, wow, okay, there's this whole other perspective, this whole other paradigm, this whole other way of thinking about these issues than the techno-utopian vision that is at the heart of the TESCREAL bundle, which I was somewhat enthusiastic about. It rendered that just patently impoverished. And so that was one of the issues. The other was just sort of studying population ethics and realizing the philosophical arguments that underlie longtermism are not nearly as strong as one might hope, especially if longtermists are going out and shaping UN policy and the decisions of tech billionaires. And the other one was just reading about the history of utopian movements that became violent, and realizing that, okay, a lot of these movements combined two elements: a utopian vision of the future, and a kind of broadly utilitarian mode of moral reasoning. When you put those together, then if the ends justify the means and if the ends are utopia, then what means are off the table? So that was the other sort of epiphany I had, like, "Wow, longtermism could actually be really dangerous." It could recapitulate the same kinds of violence and extreme actions that we witnessed throughout the 20th century with a lot of utopian movements.

Timnit Gebru: And they explicitly say that some of those tragedies are just a blip, right? They're not as bad as, like, the tragedy of not having the utopia that they think we all are destined to have.

Émile P. Torres: This is the galaxy brain part. When you take a truly cosmic perspective on things, even the worst atrocities or the worst disasters of the 20th century, World War II, the 1918 Spanish flu and so on, those are just "mere ripples on the surface of the great sea of life," to quote Nick Bostrom. So there's a kind of, from this grand cosmic perspective, it kind of inclines people to adopt this view, to minimize or trivialize anything that is sub-existential, anything that doesn't directly threaten-

Timnit Gebru: There's a good question here, and there's multiple of them. Sharon has two questions, which I'll lump into one. One is about the degree to which ethics is being emphasized and factored into the data collection and cleaning processes required by machine learning systems. And, you know, there's a vast underclass that has emerged, tasked with feeding data into these systems. What are your thoughts on this? And how does it play into your own research or work? Well, for me, personally, I've done... my institute has worked on the exploited workers behind AI systems. And so what's really interesting is, while you have the TESCREAL organizations like OpenAI talking, and you can just go read what Sam Altman writes and what Ilya and the rest of them write, you know, while they're talking about how utopia is around the corner, and how they have announced this huge AGI alignment group and they're gonna save us, they're simultaneously disenfranchising a lot of people. They have a bunch of people that they have hired. Karen Hao just had a great article recently in The Wall Street Journal about the Kenyan workers who were filtering the outputs of ChatGPT. And one of them was saying how just five months of working on this, the mental state that he was in afterwards, made him lose his entire family. Just five months, right? So that's what's going on. So as they're talking about how AGI is around the corner, and how they're about to create this superintelligent being that needs to be regulated because it's more powerful than everything we've ever thought of, they're very intentionally obfuscating the actual present-day harm that they are causing by stealing people's data, like creatives'. And it makes total sense to me that they're thinking about just automating away human artists, right? Because that's just, like, the non-good part about being human for them. That part that they want to transcend. But also it helps them make a lot of money. So they're stealing data. They're exploiting a lot of workers and traumatizing them in this process. However, if you take this cosmic view, like Émile was saying, these are just blips on the way to utopia, so it's fine. It's okay for them to do this on the way to the utopia that we're all going to have if we get the AGI that's going to save humanity.

Émile P. Torres: Yeah, so basically, I think longtermists would say that, okay, some of these things are bad. But again, there's an ambiguity there. They're bad in an absolute sense. But relatively speaking, like, they really just are... I mean, the 1918 Spanish flu killed just millions and millions of people. And that is just a mere ripple. It's just a tiny little blip in the grand scheme. So all of the harms now, like, it's not that we should completely dismiss them. But don't worry too much about them, because there are much bigger fish to fry, like getting utopia right, by ensuring that the AGI we create is properly value-aligned, that it does what we say. So, for example, when we say cure aging, it does that; it takes about a minute to think about it, and it cures aging. Colonize space, you know. That's what matters just so much more, because there's astronomical amounts of value in the future. And the loss of that value is a much greater tragedy than whatever harms could possibly happen right now to current people.

Timnit Gebru: So Michael asks, if pausing AI research is something we should be skeptical of, what sorts of policies should we support to prevent immediate harms posed by AI systems? That's a great question, because when we saw this pause-AI letter, we had to come up with a response. So I'll link to it. But in our response, we said that we need to consider things like how the information ecosystem is being polluted by synthetic text coming out of things like large language models. We need to consider labor and what's happening to all the exploited workers and all the people whose labor these companies are trying to devalue and displace. We need to think about all of the harmful ways in which AI is being used, whether it is at the border to disenfranchise refugees, or, you know, bias, and people being falsely accused of crimes based on being misidentified by face recognition, etc. So, first, I think we need to address the labor issue and the data issue. So, this is what they do, right? When they're talking about this large cosmic, whatever, galaxy thing, you think that there isn't mundane day-to-day stuff that they're doing, like a normal corporation is doing, that can be regulated by normal agencies that have jurisdiction. So we can make sure that we can analyze the data that they're using to train these systems and make sure that they have to be transparent about it, as in, you know, prove to us that you're not using people's stolen data. For instance, make it opt-in, not opt-out. And also make it difficult for them to exploit labor like they are. That's just one example. But just to be brief, I will post our one-page response to that pause-AI letter in the chat, and so maybe you can see it in the comments or something like that.

Émile P. Torres: So we're more or less out of time. But one of the harms that also doesn't get enough attention is, on the one hand, the release of ChatGPT, just releasing it into society, sort of upended systems within a lot of universities, because suddenly students were able to cheat. And it was really difficult to... I knew multiple professors who had students who turned in papers that were actually authored by ChatGPT. But the flip side of that is that there are also some students who have been accused of plagiarizing, meaning using ChatGPT, who actually didn't. And Timnit, you were just tweeting the other day about a student.

Timnit Gebru: And this kind of cosmic view that we're talking about allows these companies to deceive people about their capability. So for example, OpenAI, if it makes you believe that they've created this superintelligent thing, then you're going to think that, and then you're going to use it and many of their systems. Similarly, if they deceive you into thinking that they've created a detector that detects, with high accuracy, whether an output is from ChatGPT or not, you're going to use it. So what's happening is that people have been using these kinds of systems to falsely accuse students of not creating original work. So OpenAI quietly deprecated their detector, and it's really interesting how loud they are about the supposed capabilities of their systems and how quiet they were about, you know, removing this detector. So, I think, for me, my message to people would be, don't buy into this superintelligence hype. Keep your eye on the present-day dangers of these systems, which are based on very old ideas of imperialism, colonization, centralization of power and maximizing profit, and not on safeguarding human welfare. And that's not a futuristic problem; that's an old problem that still exists today.

Émile P. Torres: So that ties into... maybe we'll take one last question. We're so sorry to everybody who asked a question we didn't get to. Genuine apologies for that. Okay, so: "For those who may not have the tech background, what conversations do you think must happen from below, especially as this targets marginalized communities, the Global South, class, and so on?" Timnit, your thoughts on that? I can say a few things but-

Timnit Gebru: I can say something shortly, and then I'm curious to hear your thoughts. Well, you know, to know the harms, you don't have to have a tech background. So that's a good thing to remember, right? When something is harmful, you don't have to know the ins and outs of how it works. And often the people who do know the issues are people with lived experience of either being algorithmically surveilled, or losing their jobs, or of being accused of something they didn't do. The student who emailed us, who was falsely accused of writing an essay, I mean, plagiarizing an essay, didn't need to know anything about how it worked to know that this was an injustice. So I think that's the first point. The first point is people need to know that they need to be part of the conversation, and they don't need to know how it works. There's a concerted effort to mislead you as to the capabilities of current AI systems. The second point to me is that we should be very skeptical of companies that claim to be building something all-knowing and say, "Oh my God, this all-knowing thing needs to be regulated," and then complain when it is. That's what OpenAI did. They went to the U.S. Congress and said that there needs to be regulation, and they're scared. Then the EU regulates it and they're like, "Oh, we might have to pull out of the EU." So just think of it as entities using certain systems, and whether those entities are doing the right thing and whether those systems are harmful or not; there's really nothing new about this new set of technologies that can be used to disenfranchise people. As much as possible, I highly recommend people, if they are in tech or are thinking about policy, invest in small local organizations that don't have to depend on these large multinational corporations. And think about how the fuel for this exploitation is data and labor. So think about where that comes from, how people can be adequately compensated for their labor, and how people's data can not be taken without their consent.

Émile P. Torres: The only thing I would add to that, tying this back to the central thrust of this whole conversation, is just, I think, being aware of some of the ideologies that are shaping the rhetoric, the goals and so on of these leading AI companies. And sort of fitting the pieces together and understanding why it is that there's this race to create AGI. Again, you know, these ideologies that fit within the TESCREAL bundle, if not for the fact that they are immensely influential, that they are shaping some of the most powerful individuals in the tech world, from Elon Musk to Sam Altman, and so on... if it weren't for that fact, then perhaps a lot of this would be a reason to chuckle. But I mean, it is hugely influential. So I think the first step in figuring out a good way to combat the rise of these ideologies is at least just understanding what they are, how they fit together and the ways in which they're shaping the world we live in today.

Timnit Gebru: I was gonna say, Nitasha Tiku has a great article that just came out in The Washington Post that details the amount of money that's going into this kind of AI doomerism on the Stanford campus from the effective altruists. So this is just one angle, but I think it's good to know how much money and influence is going into this.

Émile P. Torres: Alright, so thanks for having us. I think Zuade might come back in a moment.

Zuade Kaufman: I just wanted to thank you. That was just so intriguing and important. And thank you for all your work and for being part of Truthdig.

Émile P. Torres: Thanks for hosting us.

Zuade Kaufman: Yeah, just keep the information rolling. And I know you also provided some links in the chat that we'll share with our readership, whatever further readings you think they should be doing, and of course, buying your book and continuing. Thank you so much.



Why Hawaii Should Take The Lead On Regulating Artificial … – Honolulu Civil Beat

A new state office of AI Safety and Regulation could take a risk-based approach to regulating various AI products.

Not a day passes without a major news headline on the great strides being made on artificial intelligence and warnings from industry insiders, academics and activists about the potentially very serious risks from AI.

A 2023 survey of AI experts found that 36% fear that AI development may result in a nuclear-level catastrophe. Almost 28,000 people have signed on to an open letter written by the Future of Life Institute, including Steve Wozniak, Elon Musk, the CEOs of several AI companies and many other prominent technologists, asking for a six-month pause or a moratorium on new advanced AI development.

As a public policy lawyer and also a researcher in consciousness (I have a part-time position at UC Santa Barbara's META Lab), I share these strong concerns about the rapid development of AI, and I am a co-signer of the Future of Life open letter.

Why are we all so concerned? In short: AI development is going way too fast and it's not being regulated.

The key issue is the profoundly rapid improvement in the new crop of advanced chatbots, or what are technically called large language models such as ChatGPT, Bard, Claude 2, and many others coming down the pike.

The pace of improvement in these AIs is truly impressive. This rapid acceleration promises to soon result in artificial general intelligence, which is defined as AI that is as good as or better than humans at almost anything a human can do.

When AGI arrives, possibly in the near future but possibly in a decade or more, AI will be able to improve itself with no human intervention. It will do this in the same way that, for example, Google's AlphaZero AI learned in 2017 how to play chess better than even the very best human or other AI chess players, in just nine hours from when it was first turned on. It achieved this feat by playing itself millions of times over.
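
To make the self-play idea concrete, here is a toy sketch, invented for this explainer rather than taken from AlphaZero's actual system (which pairs deep neural networks with Monte Carlo tree search). The program learns the simple game of Nim purely by playing against itself, starting with no knowledge of strategy; the game, parameters, and function names are all illustrative choices.

```python
# Toy self-play learner for Nim: the player who takes the last object wins.
# The program starts with no strategy and improves only by playing itself.
import random

MAX_TAKE = 3          # a player may remove 1-3 objects per turn
PILE = 10             # starting pile size
Q = {}                # Q[(pile, take)] = estimated value for the player to move
ALPHA, EPSILON = 0.5, 0.1

def legal_moves(pile):
    return range(1, min(MAX_TAKE, pile) + 1)

def choose(pile):
    # epsilon-greedy: mostly pick the best-known move, sometimes explore
    if random.random() < EPSILON:
        return random.choice(list(legal_moves(pile)))
    return max(legal_moves(pile), key=lambda m: Q.get((pile, m), 0.0))

def self_play_episode():
    pile = PILE
    while pile > 0:
        move = choose(pile)
        new_pile = pile - move
        if new_pile == 0:
            target = 1.0      # taking the last object wins
        else:
            # the next position belongs to the opponent, so its value is negated
            target = -max(Q.get((new_pile, m), 0.0) for m in legal_moves(new_pile))
        old = Q.get((pile, move), 0.0)
        Q[(pile, move)] = old + ALPHA * (target - old)
        pile = new_pile

for _ in range(20000):
    self_play_episode()

# After enough self-play, the learned policy prefers moves that leave the
# opponent a multiple of 4 -- the known winning strategy for this game.
print({pile: max(legal_moves(pile), key=lambda m: Q.get((pile, m), 0.0))
       for pile in range(1, PILE + 1)})
```

The same loop, run at vastly larger scale and with a neural network instead of a lookup table, is the basic recipe behind systems such as AlphaZero.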

In testing, GPT-4 performed better than 90% of human test takers on the Uniform Bar Exam, a standardized test used to certify lawyers for practice in many states. That figure was up from just 10% in the previous GPT-3.5 version, which was trained on a smaller data set. OpenAI reported similar improvements in dozens of other standardized tests.

Most of these tests are tests of reasoning, not of regurgitated knowledge. Reasoning is perhaps the hallmark of general intelligence, so even today's AIs are showing significant signs of general intelligence.

This pace of change is why AI researcher Geoffrey Hinton, formerly with Google for a number of years, told the New York Times: "Look at how it was five years ago and how it is now. Take the difference and propagate it forwards. That's scary."

In a mid-May Senate hearing on the potential of AI, Sam Altman, the head of OpenAI, called regulation crucial. But Congress has done almost nothing on AI since then, and the White House recently issued a letter applauding a purely voluntary approach adopted by the major AI development companies like Google and OpenAI.

A voluntary approach on regulating AI safety is like asking oil companies to voluntarily ensure their products keep us safe from climate change.

With the AI explosion underway now, and with artificial general intelligence perhaps very close, we may have just one chance to get it right in terms of regulating AI to ensure it's safe.

I'm working with Hawaii state legislators to create a new Office of AI Safety and Regulation because the threat is so immediate that it requires significant and rapid action. Congress is working on AI safety issues, but it seems simply incapable of acting rapidly enough given the scale of this threat.

The new office would follow the precautionary principle in placing the burden on AI developers to demonstrate that their products are safe before they are allowed to be used in Hawaii. The current approach by regulators is to allow AI companies to simply release their products to the public, where they're being adopted at record speed, with literally no proof of safety.

We can't afford to wait for Congress to act.

The new Hawaii office of AI Safety and Regulation would then take a risk-based approach to regulating various AI products. This means that the office staff, with public input, would assess the potential dangers of each AI product type and would impose regulations based on the potential risk. So less risky products would be subject to lighter regulation and more risky AI products would face more burdensome regulation.
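
Purely as an illustration of what "risk-based" means in practice, here is a minimal sketch; the product categories, tiers and obligations are invented for this example and are not drawn from any proposed Hawaii legislation.

```python
# Hypothetical mapping from product type to risk tier, and from tier to obligations.
RISK_TIERS = {
    "low":    ["spam filtering", "spell checking"],
    "medium": ["targeted advertising", "resume screening"],
    "high":   ["medical diagnosis", "autonomous vehicles"],
}
OBLIGATIONS = {
    "low":    ["register the product"],
    "medium": ["register the product", "publish an impact assessment"],
    "high":   ["register the product", "publish an impact assessment",
               "obtain pre-deployment approval"],
}

def obligations_for(product: str) -> list[str]:
    # Unknown products default to the strictest tier, per the precautionary principle.
    for tier, products in RISK_TIERS.items():
        if product in products:
            return OBLIGATIONS[tier]
    return OBLIGATIONS["high"]

print(obligations_for("resume screening"))
```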

My hope is that this approach will help to keep Hawaii safe from the more extreme dangers posed by AI, which another recent open letter, signed by hundreds of AI industry leaders and academics, warned should be considered as dangerous as nuclear war or pandemics.

Hawaii can and should lead the way on a state-level approach to regulating these dangers. We can't afford to wait for Congress to act, and it is all but certain that anything Congress adopts will be far too little and too late.

Read the rest here:

Why Hawaii Should Take The Lead On Regulating Artificial ... - Honolulu Civil Beat

Artificial Intelligence (AI) Explained in Simple Terms – MUO – MakeUseOf

Artificial intelligence is all the rage nowadays, with its huge potential causing a stir in almost every industry. But fully understanding this complex technology can be tricky, especially if you're not well-versed in tech topics. So, let's break down artificial intelligence into its most simple terms. How does this technology work, and how is it being used today?

You may think of humanoid robots and super intelligent computers when the term "artificial intelligence" comes to mind. But today, that's not what this technology represents.

Artificial intelligence (AI) is a branch of computer science aiming to build machines capable of mimicking human intelligence. It involves creating algorithms that allow computers to learn from and make decisions or predictions based on data rather than following only explicitly programmed instructions.

Machine learning (ML), a subset of AI, involves systems that can "learn" from data. These algorithms improve their performance as the number of datasets they learn from increases. Deep learning, a further subset of machine learning, uses artificial neural networks to make decisions and predictions. It is designed to mimic how a human brain learns and makes decisions.
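
Here is a minimal sketch of what "learning from data" looks like in code, assuming the scikit-learn library is installed; the umbrella example and all of its numbers are invented purely for illustration.

```python
# Instead of hand-coding a rule for when to carry an umbrella, we give the
# model labelled examples and let it infer the pattern on its own.
from sklearn.tree import DecisionTreeClassifier

# Each example: [chance of rain (%), wind speed (km/h)] -> took an umbrella (1) or not (0)
X = [[90, 10], [80, 25], [70, 5], [60, 10], [20, 30], [15, 5], [10, 15], [5, 40]]
y = [1, 1, 1, 1, 0, 0, 0, 0]

model = DecisionTreeClassifier().fit(X, y)   # "training": learn patterns from the examples
print(model.predict([[85, 20], [12, 35]]))   # predictions for unseen days -> [1 0]
```

The model is never given an explicit rule such as "take an umbrella when the chance of rain is high"; it infers that from the labelled examples, and as a rule its predictions improve as it sees more, and more varied, examples.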

Natural language processing (NLP) is another important aspect of AI, dealing with the interaction between computers and humans using natural language. The ability of machines to understand, generate, and respond to human language is crucial for many AI applications, like virtual assistants and AI chatbots (more of which we'll discuss in a moment).

Artificial intelligence can be classified into two main types: narrow AI, which is designed to perform a narrow task (such as facial recognition or internet searches), and artificial general intelligence (AGI), which is an AI system with generalized human cognitive abilities so that it can outperform humans at most economically valuable work. AGI is sometimes referred to as strong AI.

However, despite many advancements, AI still does not possess the full spectrum of human cognitive abilities, and we are still far from achieving true artificial general intelligence. The current AI technologies are task-specific and cannot understand context outside of their specific programming.

Artificial intelligence is like teaching computers to learn just like humans. They do this by looking at lots of data or examples and then using that to make decisions or predictions.

Imagine you are learning to ride a bike. After falling a few times, you start to understand how to balance and pedal at the same time. That's how machine learning, a part of AI, works. It looks at a lot of data and then learns patterns from it. Another part of AI, called natural language processing, is similar to teaching computers to understand and speak human language.

Even with all this, computers still can't fully think or understand like humans, though this may change in the future.

AI has potential and applications that stretch far beyond the tech realm alone.

Even if you're not big into tech, you've probably heard the name "ChatGPT" a few times. ChatGPT (short for Chat Generative Pre-trained Transformer) is a generative AI chatbot. But this isn't like the chatbots you may have used in the past. ChatGPT uses artificial intelligence to process natural human language to better fulfill your requests.

ChatGPT's capabilities form a long list, including fact-checking, checking spelling and grammar, creating schedules, writing resumes, and even translating languages.

ChatGPT is far from the only generative AI chatbot, with alternatives including HuggingChat, Claude, and Google Bard. These services all differ in certain ways. Some are free, some are paid, some specialize in certain areas, while others are better with general tasks.
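
For readers curious what using one of these chatbots programmatically looks like, here is a minimal sketch assuming the OpenAI Python SDK (v1-style interface) and an API key set in the environment; the model name is illustrative, so check the provider's current documentation before relying on it.

```python
# Minimal sketch: send one prompt to a hosted chat model and print the reply.
# Assumes the `openai` package (v1 interface) and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; substitute a current one
    messages=[{"role": "user",
               "content": "Summarize what a large language model is in one sentence."}],
)

print(response.choices[0].message.content)
```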

Data analysis is a key part of our world, whether in research, healthcare, business, or otherwise. Computers have been analyzing data for many years, but using artificial intelligence can take things to the next level.

AI systems can pick up on trends, patterns, and inconsistencies more effectively than a typical computer (or human, for that matter). For example, an AI system could more distinctly highlight less obvious user habits or preferences for social media platforms, allowing them to show more personalized advertisements.
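
As a small, invented illustration of this kind of pattern-finding (again assuming scikit-learn), the snippet below groups users by behaviour without ever being told what the groups are.

```python
# Cluster users by behaviour; the grouping is discovered from the data,
# not specified in advance. All numbers are made up for illustration.
from sklearn.cluster import KMeans

# Each row: [minutes on the platform per day, share of that time spent on video]
users = [[5, 0.1], [8, 0.2], [120, 0.9], [95, 0.8], [10, 0.15], [110, 0.85]]

groups = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(users)
print(groups)  # e.g. [0 0 1 1 0 1]: light, text-oriented users vs. heavy video watchers
```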

When designing products, many elements must be considered. The cost of materials, how they're sourced, and how efficiently the product will perform are just a few factors that companies need to keep in mind, and this is where AI can help.

Because AI can learn and discover new things based on the information it is given, it can be used to carve out more cost-effective and sustainable materials and production practices for businesses. For instance, an AI system could list more eco-friendly materials that could be used in a product's battery given a comprehensive data set to work from.

AI-generated art took the world by storm in 2022, with products like DALL-E, NightCafe, and Midjourney hitting the heights of popularity. These nifty tools can take a text-based prompt and generate an art piece based on the request.

For example, if you typed "purple sunset on the moon" into DALL-E, chances are you'd get more than one result. Some art generators also let you pick a style for your generated image, such as vintage, hyperrealistic, or anime.

Some artists have pushed back against AI art generators, as they use pre-existing online art to create prompted pieces. This contributes to the theft of original art, an issue that already spans the web.

It's undoubtedly exciting to think about what AI could do for humanity and the planet in the future. AI is already being used to develop new medicines, highlight more sustainable business practices, and even make our day-to-day lives easier by performing mundane tasks like cooking or cleaning.

However, many think that the future of AI is dark and dystopian. It's no surprise that this is a common assumption, given how sci-fi books and films have created some scary stereotypes around AI and its possible consequences.

AI can indeed be abused or mishandled, but this is true for any technology. We've seen Wi-Fi, VPNs, email, and even flash drives exploited by cybercriminals to spread malware and push scams. But the worry is concentrated on artificial intelligence because of its capabilities.

In January 2023, an individual posted to a hacking forum claiming they had successfully created malware using ChatGPT. It wasn't highly complex or sophisticated malware, but the ability to create malicious code via an AI chatbot got people talking. If less advanced AI is being abused now, what will happen if super-intelligent computers are exploited in the future?

This is a valid question but is also tough to answer. At the moment, there are no AI systems that can think on the same level as a human. Many have predicted what such a machine would look like, but it's all hypothetical. While some think we'll create machines with human-level cognitive abilities in the next decade, others think it will take much longer.

While human-hunting robots may be the theme of many fictional pieces, this may never even come close to happening.

If AI is regulated correctly, its development and use could be controlled to prevent bad actors from getting their hands on highly advanced technology.

There are already a lot of discussions being had in the US and around the world about AI regulation. Some see this as a barrier, while others consider it a necessary precaution.

Licenses, laws, and general rules of thumb can all play a role in keeping AI out of the wrong hands. However, this will need to be done without restricting the development of and access to AI technology too tightly, as this could quickly become counterproductive.

Regardless of whether AI advances far beyond what it is today, it has undoubtedly transformed how computers can function. With this incredible technology, we can achieve some incredible feats, though no one knows what the future holds for humanity and artificial intelligence.

Read more from the original source:

Artificial Intelligence (AI) Explained in Simple Terms - MUO - MakeUseOf

The Pros and Cons of Artificial Intelligence (AI) – Fagen wasanni

When AI first emerged, there was a lot of enthusiasm about its potential to reduce labor-intensive work and increase efficiency. However, as with any technological advancement, AI has its positives and negatives. In recent years, prominent figures like Elon Musk and Sam Altman have expressed concerns about the potential threats AI poses.

Artificial intelligence, or AI, is the field of data science that enables machines to perform tasks that are typically done by humans using their intelligence. It involves developing computer systems or algorithms that analyze data, learn from it, and make decisions or predictions. AI techniques include machine learning, natural language processing, computer vision, and robotics.

There are two primary categories of AI: Narrow AI and AGI (Artificial General Intelligence). Narrow AI is designed to perform specific tasks, such as generating text or serving as voice assistants like Siri and Alexa. These systems excel at their designated tasks but lack general cognitive abilities. AGI, on the other hand, represents a more advanced version of AI that aims to imitate human learning and understanding across various tasks.

One of the main concerns about AI is its potential impact on unemployment. Some believe that AI has the capacity to replace existing jobs, while others argue that it will create new opportunities and enhance existing ones. Technological revolutions have historically led to job displacement, but they have also given rise to new and exciting career paths that require new skill sets. AI can assist and augment human capabilities, creating synergies that open up new possibilities.

While there are concerns about the potential risks of AI, research on AI continues because of its demonstrated potential to assist us in various areas, improving efficiency and solving complex problems. AI has already had a transformative impact in fields like healthcare, finance, transportation, and environmental conservation.

It is crucial to strike a balance between innovation and prudence when it comes to AI. The development of Artificial General Intelligence raises concerns about the potential consequences of machines surpassing human capabilities. However, with proper safeguards and a cautious approach, AI can serve as a transformative force for the betterment of humanity.

In conclusion, AI has its pros and cons. It has the potential to revolutionize industries and improve efficiency, but it also raises concerns about unemployment and the potential risks of advanced AI. It is important to embrace change, cultivate a growth mindset, and continuously learn new skills to thrive in an ever-changing job market.

Continue reading here:

The Pros and Cons of Artificial Intelligence (AI) - Fagen wasanni

Will "godlike AI" kill us all or unlock the secrets of the universe … – Salon

Since the release of ChatGPT last November, apocalyptic warnings that AGI, or artificial general intelligence, could destroy humanity have been all over the news. "AI poses 'risk of extinction,'" says "leaders from OpenAI, Google DeepMind, Anthropic, and other AI labs," the New York Times reported last May. The previous month, TIME magazine published an article by the leading "AI doomer," Eliezer Yudkowsky, who declared that "the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die." Similarly, an AI researcher named Connor Leahy told Christiane Amanpour in an interview for CNN that the prospect of AGI killing off the entire human population was "quite likely."

At the very same time, the prospect of "God-like AI" has also inspired a flurry of utopian proclamations. Tech billionaire Marc Andreessen claims that advanced AI will radically accelerate economic growth and job creation, leading to "heightened material prosperity across the planet." It will also enable us to "profoundly augment human intelligence," cure all diseases and build an interstellar civilization. The CEO of OpenAI, Sam Altman, echoes these promises, arguing that AGI will make space colonization possible, create "unlimited intelligence and energy" and ultimately produce "a world in which humanity flourishes to a degree that is probably impossible for any of us to fully visualize yet."

All of this might seem unprecedented. There are so many dire warnings of imminent extinction in the news right now, sometimes paired with equally wild predictions that a new era of radical abundance lies just around the corner. Surely something big is happening. Yet this isn't the first time that notable scientists and self-described "experts" have announced to the public that we're on the cusp of creating a magical new technology that will either annihilate humanity or usher in a utopian world of unfathomable wonders. We've been here before, and what happened? In every case, the outcome was much less sensational than people were led to believe. Often, the hype turned out to be a giant nothingburger.

To put the frenzied hype around AGI into historical perspective, let's revisit one such episode from the early 20th century. Understanding that history will demonstrate that what we're seeing now is nothing new.

It began with the discovery of radioactivity in 1896 by the French physicist Henri Becquerel. What is radioactivity? Let's start by imagining that you place a chunk of iron in direct sunlight for a few hours and then move it to a dark room. If you touch the iron right after moving it inside, it will feel pretty hot, right? But with each passing minute its temperature will drop, until it returns to room temperature.

This is simple enough: The iron absorbed energy from the sun and then re-radiated it in the form of thermal energy, which we experience as heat. Without the sunlight (an external source of energy), its temperature will equilibrate to the temperature of its environment.

Now let's imagine a different chunk of metal. We place it in a dark, cool room for several days, only to discover that it's actually radiating energy on its own. That's what Becquerel found: The metal called uranium will give off a slight glow even if it's kept in a dark room with no external source of energy. This glow can't be seen with the naked eye, but if you place the uranium next to a photographic plate, an image of it will appear even if the uranium has been stored in a pitch-black room for weeks at a time. How can it radiate energy without an external source?

Becquerel's observation didn't get much attention at first. That all changed after Marie Curie discovered that radium, another type of metal, also produced energy on its own but in much greater quantities. In fact, you can literally see radium glowing with the naked eye in a dark room. Curie coined the word "radioactivity" to denote this phenomenon, though she had no idea how or why it was happening. A metal that could produce its own internal energy at first seemed like a violation of the laws of physics.

An explanation finally came in 1901 from a pair of physicists, Frederick Soddy and Ernest Rutherford. Their discovery was mind-blowing: Some atoms in the radioactive metal spontaneously turned into atoms of a completely different kind of metal, and each time that happened, a small amount of energy was released. That's how these metals produce energy without an external source: Uranium atoms, one at a time, morph into atoms of a different metal, thorium, through a process called radioactive decay. Atoms of thorium, which is also radioactive, then decay into other types of atoms, including radium, until the entire clump becomes a "stable" (that is, non-radioactive) form of lead, the heavy metal formerly used in paint and gasoline. That ends the process of radioactive decay, which has produced energy from beginning to end.

In previous centuries, alchemists had tried to convert one type of metal into another, usually lead into gold, with a notable lack of success. What Soddy and Rutherford realized was that nature itself is an alchemist, "transmuting" materials into other types of materials through the spontaneous process of radioactive decay. Indeed, when Soddy realized what was going on, he shouted to his colleague: "Rutherford, this is transmutation!" Rutherford then shot back: "For Mike's sake, Soddy, don't call it transmutation. They'll have our heads off as alchemists." Alchemy had long since lost any respectability among professional scientists, and Rutherford didn't want to jeopardize their careers.

An even more significant discovery happened a year later, in 1902, when Soddy and Rutherford found that the amount of energy produced by radioactive decay was enormous, not in "absolute" terms but "relative" to the size of the atoms. As historian Spencer Weart writes, the duo's research "showed that radioactivity released vastly more energy, atom for atom, than any other process known."

Exactly how much energy does radioactive decay produce? The answer is given by Albert Einstein's famous equation E = mc², first published in a 1905 paper that introduced his "theory of special relativity."

That equation says two important things about the peculiar nature of our universe: First, it states that mass and energy are equivalent. They are "different manifestations of the same thing," as Einstein explained in a 1948 interview. No one at the time believed this: mass and energy, it was assumed, were clearly different types of phenomena, but Einstein showed that this commonsense, intuitive idea was wrong.

Second, the equation states that small amounts of mass are equal to enormous amounts of energy. To calculate the amount of energy contained in some quantity of mass, you first square the "c," which stands for the speed of light (a very large number), and then multiply the resulting number (the c²) by the amount of mass in question. The result is the amount of energy you get if that mass is converted into energy. In Einstein's words, the E = mc² equation shows "that very small amounts of mass may be converted into very large amounts of energy."
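
To see the scale involved, here is the standard back-of-the-envelope calculation for a single gram of mass (not from the article, but uncontroversial physics):

```latex
E = mc^2
  = (10^{-3}\,\mathrm{kg}) \times (3.0 \times 10^{8}\,\mathrm{m/s})^{2}
  = 9 \times 10^{13}\,\mathrm{J}
```

That is roughly 25 million kilowatt-hours, or about 21 kilotons of TNT equivalent, from one gram of matter fully converted into energy.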

This means that atoms contain a colossal storehouse of energy "atomic energy," as it was called at first, although "nuclear energy" is more common today. This atomic energy is what radioactive materials give off when they spontaneously decay: As the atoms of one type of metal transmute into atoms of another, they lose a little bit of mass, and this lost mass is converted into energy. That's how radioactive metals like uranium and radium produce their own internal energy, without any external source.

The implications of this extraordinary discovery were profound. If there were some way to extract, harness or liberate this great reservoir of atomic energy, then tiny amounts of mass could be used to power entire civilizations. Atomic energy could usher in a new era of endless abundance, a post-scarcity world in which the energy available to us would be virtually "inexhaustible." As Soddy declared in a popular book published in 1908,

A race which could transmute matter would have little need to earn its bread by the sweat of its brow. If we can judge from what our engineers accomplish with their comparatively restricted supplies of energy, such a race could transform a desert continent, thaw the frozen poles, and make the whole world one smiling Garden of Eden. Possibly they could explore the outer realms of space, emigrating to more favourable worlds as the superfluous to-day emigrate to more favourable continents.

Elsewhere he claimed that, by releasing the energy stored in atoms, "the future would bear as little relation to the past as the life of a dragonfly does to that of its aquatic prototype," and that "a pint bottle of uranium contained enough energy to drive an ocean liner from London to Sydney and back."

Journalists ate all this up, raving about the transformative potential of atomic energy on the pages of leading newspapers and magazines. "When Rutherford and Soddy pointed out that radioactive forces might be the long-sought source of the sun's own energy," Weart writes, "the press took up the idea with relish. Instead of sustaining future civilization with solar steam boilers, perhaps scientists would create solar energy itself in a bottle!" One of the most prominent scientific voices of his day, Gustave Le Bon, prophesied that "the scientist who finds the means of economically releasing the forces contained in matter will almost instantaneously change the face of the world," adding that "the poor will be equal to the rich and there will be no more social problems."

By the 1920s, most people, including many schoolchildren, were familiar with the idea that atomic energy would revolutionize society. Some even predicted that controlled transmutation might produce gold as an accidental by-product, which could make people rich while solving all our energy woes. Exemplifying hopes that a Golden Age lay just ahead, Waldemar Kaempffert wrote in a 1934 New York Times article that although we couldn't yet unlock the storehouse of energy in atoms, a method would soon be discovered, and once that happened, "probably one building no larger than a small-town postoffice of our time will contain all the apparatus required to obtain enough atomic energy for the entire United States."

This was the utopian side of the hype around radioactivity. Yet just as sensational were the apocalyptic cries that the very same phenomenon could destroy the world and perhaps even the entire universe. In 1903, two years after discovering transmutation, Soddy described our planetary home as "a storehouse stuffed with explosives, inconceivably more powerful than any we know of, and possibly only awaiting a suitable detonator to cause the earth to revert to chaos." Le Bon worried about a device that, with the push of a button, could "blow up the whole earth." Similarly, in a 1904 book, scientist and historian William Cecil Dampier wrote that

it is conceivable that some means may one day be found for inducing radio-active change in elements which are not normally subject to it. Professor Rutherford has playfully suggested to [me] the disquieting idea that, could a proper detonator be discovered, an explosive wave of atomic disintegration might be started through all matter which would transmute the whole mass of the globe into helium or similar gases.

This is the idea of a planetary chain reaction: a process of contagious radioactivity, whereby the decay of one type of atom triggers the decay of other atoms in its vicinity, until the entire earth has been reduced to a ghostly puff of gas. Human civilization would be obliterated.

Some even linked this possibility with novae observed in the sky: sudden bursts of light that dazzle the midnight firmament. What if these novae were actually the remnants of technological civilizations like ours, which had in fact discovered the dreaded "detonator" referenced by Rutherford? What if novae were, as one textbook put it, "brought about perhaps by the 'super-wisdom' [i.e., the technological capabilities] of the unlucky inhabitants themselves?"

This was not a fringe idea. Frédéric Joliot-Curie, the son-in-law of Marie Curie, even mentioned it in his Nobel Prize speech, delivered in 1935 after he and his wife, Irène, discovered a way to cause radioactive decay to occur in otherwise non-radioactive materials, a phenomenon known as artificial radioactivity. "If such transmutations do succeed in spreading in matter," Joliot-Curie declared to his Nobel audience,

the enormous liberation of usable energy can be imagined. But, unfortunately, if the contagion spreads to all the elements of our planet, the consequences of unloosing such a cataclysm can only be viewed with apprehension. Astronomers sometimes observe that a star of medium magnitude increases suddenly in size; a star invisible to the naked eye may become very brilliant and visible without any telescope: the appearance of a Nova. This sudden flaring up of the star is perhaps due to transmutations of an explosive character like those which our wandering imagination is perceiving now, a process that the investigators will no doubt attempt to realize while taking, we hope, the necessary precautions.

At the extreme, some even reported to the public that "eminent scientists" thought this chain reaction of radioactive decay might spread throughout the universe as a whole, destroying not just our planet but the entire cosmos. By the 1930s, Weart notes, "even schoolchildren had heard about the risk of a runaway atomic experiment."

These were the grandiose promises and existential fears associated with radioactivity. They were promulgated by leading scientists, amplified by the media and so widely discussed that even children became familiar with them. What lay ahead, people were told, was a utopian world of limitless energy in which all societal problems would be solved. Or, on the other hand, radioactivity could bring about a dystopian nightmare in which, as Rutherford liked to say, "some fool in a laboratory might blow up the universe unawares" by inadvertently triggering a planetary chain reaction through some artificial radioactivity process.

The parallels with the current hype around AGI are striking. Today, one finds prominent figures like Andreessen and Altman proclaiming that AGI could solve virtually all our problems, ushering in a utopian world of "heightened material prosperity across the planet," "unlimited intelligence and energy" and human flourishing "to a degree that is probably impossible for any of us to fully visualize yet."

At the same time, Altman notes that the worst-case outcome of AGI could be "lights-out for all of us," meaning total human extinction, caused not by a planetary chain reaction but by a different exponential process called "recursive self-improvement," which some believe could trigger an "intelligence explosion." These doomsday prophecies have been further amplified by AI researchers like Geoffrey Hinton and Yoshua Bengio, both of whom won the Turing Award, often called the "Nobel Prize of Computing."

Meanwhile, the media has lapped up all this hype, both utopian and apocalyptic, amplifying these warnings of existential doom while also declaring that AGI could revolutionize our world for the better.

Historians of science and technology have seen this all before. The details were different, but the hype wasn't. If the past is any guide to the future, the push to create AGI by building ever-larger "language models" (the systems that power ChatGPT and other chatbots) will end up a giant nothingburger, despite the grand proclamations all over the media.

Furthermore, there is another important parallel between radioactivity in the early 20th century and the current race to create AGI. This was pointed out to me by Beth Singler, an anthropologist who studies the links between AI and religion at the University of Zurich. She notes that just as the dangers of the everyday uses of radioactivity were ignored, the harmful everyday uses of AI are being ignored in public discourse in favor of the potential AI apocalypse.

Not long after Marie Curie wowed audiences at a major scientific conference in 1900 with vials of radium "so active that they glowed with a pearly light," a physician who studied radioactivity with Marie Curie, Sabin Arnold von Sochocky, realized that adding radium to paint caused the paint to glow in the dark. He co-founded a company that began to manufacture this paint, which was used to illuminate aircraft instruments, compasses and watches. It proved especially useful during World War I, when soldiers began to fasten their pocket watches to their wrists and needed a way to see the time in the dark trenches to synchronize their movements.

Exposure to the gamma rays emitted by radium poses a radiological hazard, however, which very likely caused Sochocky's own death at age 45. Worse, as Singler points out, throughout the 1910s and 1920s many women who painted these watches in factories owned by Sochocky and others came down with radiation poisoning; some died and others became extremely ill. Some, such as Amelia Maggia, died after suffering a number of horrendous health complications. Several months after Maggia quit her dial-painting job, "her lower jawbone and the surrounding tissue had so deteriorated that her dentist lifted her entire mandible out of her mouth." She passed away shortly after that.

The victims of this industry were called the "radium girls," as most factory workers were young women. They were the unwitting collateral damage of a push by Sochocky and others to get rich off the hype surrounding radium. In reality, the radium industry both generated huge profits and caused great harm, leaving many workers with devastating illnesses and killing many others.

Similar points can be made about the race to create AGI. Lost in the cacophony of grand promises and apocalyptic warnings are myriad harms affecting artists, writers, workers in the Global South and marginalized communities.

For example, in building systems like ChatGPT, OpenAI hired a company that paid Kenyan workers as little as $1.32 per hour to sift through some of the darkest corners of the web. This included "examples of violence, hate speech, and sexual abuse," leaving many workers traumatized and without proper mental health care. OpenAI also used, without permission, attribution or compensation, an enormous amount of material generated by human writers and artists, which has resulted in lawsuits for intellectual property theft that are now going to court. Meanwhile, AI systems like ChatGPT are already taking people's jobs, and some worry about widespread unemployment as OpenAI and other companies develop more advanced AI programs.

While some of this has been reported by the media, it hasn't received nearly as much coverage as the dire warnings that AGI is right around the corner, and that once it arrives, it may kill everyone on Earth. Just as the rush to cash in on radium destroyed people's lives, so too is the race to build AGI leaving a trail of damage and destruction.

The lesson here is twofold: First, we should be skeptical of claims that AGI will either bring about a utopian paradise or annihilate humanity, as scientists and crackpots alike have made identical claims in the past. And second, we must not overlook the many profound harms that AGI hype tends to obscure. If I had to guess, I'd say that AGI is the new radium, that the bubble will burst soon enough, and that companies like OpenAI will have achieved little more than hurting innocent people in the process.

Read more

from Émile P. Torres on the AI revolution

View original post here:

Will "godlike AI" kill us all or unlock the secrets of the universe ... - Salon