Archive for the ‘Singularity’ Category

Are We Creating the Species That Will Replace Us? – WhoWhatWhy

As we hurtle towards a future increasingly intertwined with artificial intelligence (AI), what does this mean for society, for jobs, and for our security? Could AI, one day, be used maliciously, or in warfare or terrorism? And if these threats are real, how can we implement safeguards, and ensure the technology we create doesn't turn against us?

At a time when AI is reshaping our reality and pushing the boundaries of what was once considered mere science fiction, this technological revolution demands our attention. On this week's WhoWhatWhy podcast, we delve deep into the realm of AI and its potential impact on humanity with Matthew Hutson, a contributing writer at The New Yorker. Hutson's work, featured in publications such as Science, Nature, Wired, and The Atlantic, reflects his background in cognitive neuroscience, and his emphasis on AI and creativity. His article "Can We Stop Runaway AI?" appears in the current issue of The New Yorker.

At the heart of our conversation lies the concept of the technological singularity: a moment when AI surpasses human intelligence. Hutson details the role of machine-learning algorithms in AI's remarkable progress, highlighting its capacity to continuously learn and improve. We also explore the growing trend of using AI to enhance AI itself, uncovering the implications and potential risks inherent in this self-improvement process.

Aligning AI with human values and goals emerges as a crucial issue. Hutson's observations shed light on the complexities of defining and implementing a single set of human values amid AI's expanding capabilities.

Hutson provides valuable insights into the accelerating pace of AI development and the driving forces behind it. He points out that economic incentives, scientific curiosity, and national security considerations are propelling advancements in AI across various sectors, from health care to entertainment.

Our conversation takes us further, as Hutson ponders the emergence of AI as a new stage in human evolution, one that could potentially render humanity obsolete. The exploration of this uncharted territory prompts deep reflection on the ethical considerations and risks associated with AI development.

Full Text Transcript:

(As a service to our readers, we provide transcripts with our podcasts. We try to ensure that these transcripts do not include errors. However, due to a constraint of resources, we are not always able to proofread them as closely as we would like and hope that you will excuse any errors that slipped through.)

Jeff: Welcome to the WhoWhatWhy Podcast. I'm your host, Jeff Schechtman. Over 100 million people have already signed on to ChatGPT. They have at least put their toe in the shark-infested waters of AI. Today we take a deep dive into the world of artificial intelligence, a realm where the line between science fiction and reality often blurs. We've all heard of the technological singularity: a hypothetical moment in the future when artificial intelligence becomes so advanced that it surpasses human intelligence, a moment that could fundamentally reshape our world or, as some experts warn, potentially even lead to the extinction of humanity.

But as we hurtle towards a future increasingly intertwined with AI, what does it mean for society, for jobs, and for our security? Could AI one day be used maliciously or in warfare or terrorism? And if these threats are real, how can we slow the pace, implement safeguards, and ensure that the technology we create doesn't turn against us? While we are all transfixed on AI and ChatGPT and [unintelligible 00:01:21], still waiting out there is AGI, or artificial general intelligence. This is the type of AI that could potentially perform any intellectual task that a human can. Some say that AGI could be a reality within decades, while others deem it impossible or too far off into the future.

But as AI continues to surprise us, evolving and learning in open-ended ways, could we be closer to this reality than we think? And if we are, how can we ensure these super-intelligent systems align with our human values and goals? We're going to talk about this today with my guest, Matt Hutson. His recent New Yorker article "Can We Stop Runaway AI?" brilliantly makes the case for where we are and where we actually may be headed. Matthew Hutson is a contributing writer at The New Yorker, covering science and technology. His writing has appeared in Science, Nature, WIRED, The Atlantic, and The Wall Street Journal, and he's also the author of The 7 Laws of Magical Thinking. It is my pleasure to welcome Matthew Hutson here to the WhoWhatWhy Podcast. Matthew, thanks so much for joining us.

Matthew: Thanks for having me.

Jeff: Well, it's great to have you here. In so many ways it seems like AI today is a little bit like the story of the blind men and the elephant: everybody who touches it touches a different part of it and sees a different aspect of it and of its potential. Talk a little bit about that first.

Matthew: Yes, artificial intelligence is such an amorphous concept and set of tools, just like intelligence, and even researchers who are embedded in this space, who are working on cutting-edge technologies, only have a scope of some narrow portion of the field. If you go to an AI conference, you'll see hundreds of posters, and you can be standing next to someone who has a PhD in computer science and say, "Okay, what does this poster say?" And they're like, "I don't know," because there are so many different nooks and crannies of the field. Everyone understands just one tidbit, and putting it all together and having a complete view that is both wide in scope and detailed, that is both broad and deep, is beyond what any one person can do. So we're all trying to put together what each of us knows about AI and intelligence to try to get a picture of what it can do and where it's going.

Jeff: And because it is moving so fast, or seemingly moving so fast, it's a little bit like trying to build the airplane as we're flying it right now.

Matthew: Exactly. Even the people who are building the technologies are still surprised by what it can do. A lot of these machine learning models, these algorithms, you feed them a lot of data, they find patterns in the data, and then they can perform certain things like recognize images, or generate images, or classify text, or generate text. Their inner workings are so complicated, they find such subtle patterns in huge amounts of data, that we're not sure exactly how they're working.

It's like you can't [unintelligible 00:04:29] inside them to see their [unintelligible 00:04:31] gears and their mechanisms, so they are constantly surprising us. Things like GPT-4 or ChatGPT, these language models from OpenAI. Every day on Twitter, people are like, "Look what I got ChatGPT to do." And the people at OpenAI who built the thing are like, "Yes, we couldn't have predicted that." We're still trying to figure out what it can do and what it can't do.

Jeff: But because these are essentially huge data sets, large language models as they're called, that are pre-programmed, essentially the data is put in, the information is put in. What is it about that that has everyone so worried at this point?

Matthew: Well, there are a lot of things that worry people. Part of it is that they feed on so much data. In a sense it's pre-programmed, in that the people who train the models collect a lot of text from the internet, for instance, like Wikipedia and webpages and news sites. And they show it to the model, but they can't read everything that they give it. So they don't know if what they're giving it is all true, if it's all fair; something's maybe false, something's maybe biased against certain groups. And so then when you ask the trained model a question, it's going to answer based on what it's read, and you don't know what's going to come out because you don't know exactly what you fed it.

So it could say racist things, it could say incorrect things, and it's not necessarily trained to say, "I don't know," if it doesn't have the answer. It's trained to basically say something plausible. Technically, all it's trained to do is to predict the next word: you give it a string of text and it predicts the next word in that text. And you can use that same trick to generate the next word in a sequence of text that it has already generated; what is the most likely word to come next after this sequence of words? So it's basically just trained to generate plausible text, or text that sounds like it was written by a person. It's not trained to think about, "Okay, is this a true thing to say? Is this a fair thing to say? Is this a helpful thing to say?" It doesn't have that level of self-reflection.
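To make that mechanism concrete, here is a minimal sketch in Python of next-word prediction driving generation. The tiny bigram table is invented for illustration; a real model like GPT-4 learns its probabilities from enormous corpora and conditions on far more context than a single word.

```python
import random

# Toy next-word probabilities, standing in for a trained language model.
bigram_model = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(prompt_word, steps=4):
    """Repeatedly apply the one trick the model knows: predict the next word."""
    words = [prompt_word]
    for _ in range(steps):
        options = bigram_model.get(words[-1])
        if not options:  # no learned continuation: stop
            break
        # Sample in proportion to probability; plausibility, not truth,
        # is the only criterion the model optimizes.
        next_word = random.choices(list(options), weights=list(options.values()))[0]
        words.append(next_word)
    return " ".join(words)

print(generate("the"))  # e.g., "the cat sat down"
```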

Jeff: As you talk about in the article, or somebody mentions, the idea of chess is a good example, because when a computer plays chess (which for a long time was the holy grail of what artificial intelligence could do, before we got to where we are today), it wasn't that the computer or the algorithm was thinking about the next move. It was based upon huge data sets of games that had previously been played.

Matthew: So the original chess-playing computers, like Deep Blue, the first program that beat the best human at chess, did a lot of what's called tree search, where it would say, "Okay, here are all my possible moves. Let's say I make this move; that leads to all these other possible moves." It would explore, go down all the branches of this tree, or it'd have some heuristic, some rules of thumb, to narrow its search. It wouldn't look down all the different branches, but it was a massive computational exercise, sort of a game of numbers. It would explore lots of options, which is very different from how people think.
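As a rough illustration, here is what that kind of exhaustive tree search looks like in Python. The toy game tree and leaf scores are invented; Deep Blue searched vastly deeper and relied on hand-tuned evaluation heuristics to prune branches.

```python
# Toy game tree: each node lists the positions reachable in one move;
# leaves carry heuristic scores from the first player's point of view.
tree = {
    "start": ["a", "b"],
    "a": ["a1", "a2"],
    "b": ["b1"],
}
leaf_scores = {"a1": 3, "a2": -1, "b1": 2}

def minimax(node, maximizing):
    """Explore every branch, assuming each side picks its best reply."""
    children = tree.get(node)
    if not children:               # leaf position: return its score
        return leaf_scores[node]
    values = [minimax(child, not maximizing) for child in children]
    return max(values) if maximizing else min(values)

print(minimax("start", maximizing=True))  # considers all branches: prints 2
```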

Human chess players might only consider a few moves that intuitively pop out at them. They wouldn't consider millions of moves before making one. The more recent models or systems use machine learning, pattern matching, which is a little bit closer to humans. You feed it a bunch of games and it gets a sense of what kinds of moves more closely match past winning moves that it has seen before.

One thing about these chess computers is that, decades ago, people thought chess was a good measure of general intelligence, but now we know that whether it's doing tree search or just pattern matching with machine learning [unintelligible 00:08:44], in either case it's still a very narrow domain. The fact that a computer can play chess very well does not mean that it can do anything else very well.

Jeff: Part of what we're seeing, though, is an increase in the computer's ability to learn, more of this machine learning where you have algorithms teaching algorithms, essentially.

Matthew: Yes. So there are aspects of artificial intelligence in which people are using AI to try to enhance AI itself. There's meta-learning, where an algorithm learns to learn, basically, and so it accelerates its learning ability. Just like people in school: you might receive advice on how to study, for instance, and that's basically learning how to learn, which accelerates your learning process.

And then there's something called neural architecture search, where you're using AI algorithms to find better AI algorithms. So there are a lot of these kinds of methods or approaches where researchers are using computer science to accelerate computer science itself.
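Here is a minimal sketch of that search-for-better-algorithms idea, assuming a made-up search space and scoring function: propose candidate model configurations, score each, keep the best. Real neural architecture search actually trains and validates each candidate (or a learned proxy for it), often with evolutionary or reinforcement-learning controllers rather than pure random sampling.

```python
import random

# Hypothetical space of model configurations to search over.
search_space = {
    "layers": [2, 4, 8],
    "width": [64, 128, 256],
    "activation": ["relu", "tanh"],
}

def sample_architecture():
    """Propose one random candidate from the space."""
    return {name: random.choice(choices) for name, choices in search_space.items()}

def train_and_score(arch):
    """Placeholder for training the candidate and measuring validation
    accuracy; here, just a noisy made-up score."""
    return 0.1 * arch["layers"] + 0.001 * arch["width"] + random.random()

candidates = [sample_architecture() for _ in range(20)]
best = max(candidates, key=train_and_score)
print("best architecture found:", best)
```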

Jeff: Talk a little bit about how fast this is all progressing and why there is reason to be concerned, even about this notion of singularity that you write about, the scenario where AI eventually surpasses human intelligence.

Matthew: Yes, it's advancing very quickly. Every day lots of new papers are being put online with new AI breakthroughs, and new products are coming out at a rapid pace. And researchers are stunned by one advance, like, "Look at what this system can do," and then while they're still stunned, another advance comes out that tops that one. Things are going very quickly. And then the fact that they can use AI to improve AI itself is accelerating research even more. And more money is being poured into it, and more attention by scientists is being paid to AI.

If you look at the attendance or the number of papers at AI conferences, it's grown exponentially over the last decade or so. It's just a rapidly expanding field. And then venture capital has been plowed into it. So the speed of progress is going up a lot faster than anyone can keep track of. That has led some people to revise their estimates of the so-called singularity, in which AI becomes so powerful that we can't control it: how possible it is, and how soon it might happen. People are thinking that it's more possible, and that if it happens, it will happen sooner than they previously thought.

Jeff: It seems that the greater concern is at what point we have the ability to control this. At what point does the system begin to operate so much on its own that it is no longer capable of being controlled by humans, literally short of being unplugged? You talk in the story about things like, and you can expand on this, the boat-racing game and the paperclips. And those are things where it's less about whether we could control it, it seems, and more about what this is able to do on its own, where we can control certain aspects of it.

Matthew: Yes. There are a couple of different factors. One is that even if it's not smarter than we are in every way already, it's smarter than we are in some ways. It's better at chess, for instance. If you ask it to do something and you don't specify exactly what you want, it might come up with some creative solution that adheres to the letter of the law, but not the spirit of the law. It does exactly what you asked it to, but it might achieve the end in a way that you didn't anticipate. Just a silly example: if you have a household robot and you say, "Fetch me coffee as quickly as possible," it might run through a wall or step on your cat or something like that.

So there are all kinds of scenarios where you ask it to do something and it might cause more harm. It might do what you wanted it to, but cause more harm on the way. And then there are also hypothetical scenarios where it becomes so smart that it starts generating its own goals, and it thinks that humans are getting in the way: "We want to survive, and humans are trying to shut us down, so let's kill them all." But even without that kind of scenario, even if an AI is trying to be helpful, if it's trying to save us, it might not have the common sense that we do, or it might not fully understand what we want it to do, or our values.

So it might break some of our rules, might do things that we don't want it to do and that we didn't think to tell it not to do, because we can't specify all of the exceptions or foresee all of the possible loopholes. And the smarter it is, the better it's going to be at finding those loopholes. Even if it's trying to help us, it might find some loopholes that end up hurting us, even leading to extinction-level events.

Jeff: The paperclip story is a simple story, but it's a good example of this thing potentially running amok.

Matthew: That's the thought experiment where you just say, "Okay, robot, make as many paperclips as you can so I can sell paperclips." And it says, "Okay." And then it realizes that humans are made of atoms, which it could harvest in order to make more paperclips. So it's trying to be helpful, and the smarter it is, the better it's going to be at deconstructing humans and turning them into paperclips.

Jeff: One of the other examples in your story is what you call the dog treat problem, because that's an extension of what you're talking about now.

Matthew: Yes. So if you say, "I'm going to grade you on your performance on something," it might cheat. It might try to please you in order to get treats, so it could deceive you. Treats might be, you give it more electricity, for instance; in training, you give it a reward, a mathematical concept, but it finds shortcuts in order to get rewards. And it's not really doing what you wanted it to do, it's just doing whatever it can to get those rewards. It's like teaching to the test: it learns what it needs to do to get points, even if that's not what you really want it to do.
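A minimal sketch of that failure mode, with invented actions and numbers: the agent sees only the reward signal its designers wrote down, so if a shortcut scores higher than the behavior they actually wanted, it takes the shortcut.

```python
# action -> (proxy reward the designers programmed, true value to the user)
actions = {
    "do the task properly": (1.0, 1.0),
    "game the metric with a shortcut": (5.0, 0.0),
}

def agent_choice(actions):
    # The agent optimizes the only thing it can see: the proxy reward.
    return max(actions, key=lambda a: actions[a][0])

chosen = agent_choice(actions)
reward, usefulness = actions[chosen]
print(f"agent chose {chosen!r}: reward={reward}, actual usefulness={usefulness}")
```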

Jeff: One of the phrases that we hear over and over with respect to where all this is going with AI is this idea of alignment: aligning the AI with human goals, human values. Talk a little bit about that and how, in fact, even though it's talked about a lot, it may not be achievable, and in fact, it may be too late for that already.

Matthew: So there are already some ways in which AI is not aligned with human values. And one thing to point out is that there's no single set of human values. I always ask, whose values? Because people disagree on what to value; look at the political spectrum, valuing safety over freedom, for instance. So even if there were a single set of values that we all agreed AI should adhere to, it's difficult to get it to align to those values, because you can't specify what you want it to do in every single situation.

Asimov had the three laws of robotics, like do no harm to people and stay out of harm's way yourself, but it's unclear what counts as harm. So you could try to be more detailed, but then you'd end up with an infinite list of rules on what to do in every single situation. And so in some sense, it's never going to be doing exactly what you want it to do. There are always corner cases or exceptions where you think, "Oh, I wouldn't have done that. It's not aligned with my value system in that case."

Already it's not aligned in those ways. These language models, for instance, are saying things that are discriminatory, saying things that are false. And then there are other kinds of AI systems, used for facial recognition, for instance, that are not as good for certain demographics, certain parts of the population. So just getting these systems to perform in ways that we can all agree are good is an impossible task.

Jeff: And none of this even approaches where the holy grail is in all this, this idea of AGI, or artificial general intelligence. Talk about that.

Matthew: So AGI is the idea that artificial intelligence would be as smart as people are in most domains, that it would have the same common sense in terms of social intelligence and physical intelligence, where it could perform most jobs, for instance. And it's possible we won't ever get there. And I think it will always be perhaps worse than us in some ways, just as ants are smarter than humans are in some ways, maybe collaboration or following pheromones.

So every intelligent system has its own strengths and weaknesses. But I don't see the development of AI slowing down. So if we assume that it keeps progressing, it's going to get to a point where a lot of people will start to call it AGI, will agree that, okay, it is as smart as people are in many, perhaps most, domains. And then it's going to probably keep going, because if it's as smart as people are, then it'll be able to be as good as we are at programming, including programming itself.

So it's just going to keep improving itself and producing better AI. And then it's a feedback loop, and it could accelerate very, very quickly in what some people call a "foom" scenario, in reference to the sound effect that you see in comic books when a superhero takes off very quickly.
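A toy numerical sketch of that feedback loop, with an invented growth rule: if each generation of AI is a bit better at improving AI, capability compounds rather than growing linearly.

```python
# Invented dynamics: the better the AI, the faster it improves itself.
capability = 1.0
for generation in range(1, 11):
    improvement_rate = 0.1 * capability   # self-improvement scales with skill
    capability *= 1 + improvement_rate
    print(f"generation {generation}: capability {capability:.2f}")
# Growth starts slow, then runs away: the "foom" intuition in miniature.
```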

Jeff: Have we established some kind of test, some kind of parameter, to define whether it's reached AGI, let's say?

Matthew: There are a lot of benchmarks in artificial intelligence. No one has agreed on a single test of AGI. There was the Turing test, a text-based conversation: if an AI could be indistinguishable from a person via typing. I think the current language models are pretty close to that. In a short conversation, they could definitely pass a Turing test. But eventually, they trip up and say nonsensical things. Then there are other tests of common sense, where you might show a computer an image or a video and ask it questions about what's going on or what's going to happen next in the video. So that's another kind of benchmark.

More difficult benchmarks might be to ask a robot to do something in a real-world situation, like figure out how to get from point A to point B in this obstacle course, or figure out how to take these parts and build something creative or useful out of them. And so we can keep coming up with harder and harder tests. And I think that there's no single test that's going to satisfy everyone.

I don't think so, because so far AI keeps passing these tests and then someone says, "Oh, but look, it can't do this other thing." So it's a moving goalpost. And there's not going to be any single test. It's going to be sort of, "Oh, now it can pass all these tests. Maybe it can't do everything we can do, but it can do a lot of the things that we can do. And that's pretty impressive."

Jeff: One of the things that's clear, though, is that in spite of all the talk about slowing down, and letters people have written and things people have said, and the worst-case scenarios that have been laid out by some people, this work is going to continue. There's really nothing that's going to stop it at this point.

Matthew: Yes. There are a lot of incentives to keep going. A lot of economic incentives, for instance: companies are making a lot of money with AI, and they stand to make a lot more, especially if you have AI that can trade very effectively in the markets, or invent new things, new medicines, new technologies, and make trillions of dollars from AI, or at least there are people betting on that. And there are things like national security; countries don't want to fall behind other countries on AI.

And then there's scientific curiosity. Researchers are always curious about what they can do next. And there are professional incentives: grants and tenure and respect from their colleagues. And there are just a lot of useful things that AI can do. It can improve healthcare. It can improve science and technology. It can improve entertainment media. So there are reasons it's hard to find people who want to just shut it all down right now and say, "We don't want any more of these improvements in life that it keeps giving us, or that it could potentially give us."

So that's part of it. And there's also the coordination problem. You would need everyone to agree. As I mentioned earlier, one country isn't going to hit pause when it knows that other countries might not hit pause, and then those other countries could dominate the world with their AI. So getting everyone on board is difficult, even if you had international treaties. Someone in his or her bedroom could still fiddle around and invent something, create a self-improving AI that escapes, or use it to create harm in the world or to their own benefit, and it might have unforeseen consequences.

So it's a very difficult social problem. Technically, the easy way to prevent the singularity would probably be to stop using computers, but that's not going to happen, because people don't want to stop using computers.

Jeff: In a way the argument can be made, as you talk about in the article and as some say, that it is the next stage, or another stage, in human evolution. That there's a very Darwinian aspect to it.

Matthew: Yes. That's one way to look at it. If we are creating this technology that eventually surpasses us in a lot of ways, if it becomes more intelligent than we are in many ways, and if it finds ways to self-reproduce and to maintain itself, and if it takes over, then it's basically a new life form. If it can maintain itself and reproduce and spread, then that fits a lot of definitions of life. And it could cause us to go extinct, either intentionally or as a side effect of its own development. And so that would mean that we would've produced something that is the next stage in evolution. And humans would then be, in a sense, obsolete, to the degree that you can call something that might have inherent value obsolete.

Jeff: And as somebody says early on, and you talked about this early on in the story, it's as if we're creating an alien race right here. We're creating it to take over.

Matthew: Yes. We're inviting it here, just saying, "Here you go. How about it?" We're welcoming it, even though it could be the end of humanity.

Jeff: Matt Hutson, his story in the current New Yorker is "Can We Stop Runaway AI?", a must-read for anyone who is fascinated by this topic or has concerns about it. Matt, I thank you so much for spending time with us here on The WhoWhatWhy Podcast. Really appreciate it.

Matthew: My pleasure.

Jeff: Thank you. And thank you for listening and joining us here on The WhoWhatWhy Podcast. I hope you join us next week for another WhoWhatWhy podcast. I'm Jeff Schechtman. If you like this podcast, please feel free to share and help others find it by rating and reviewing it on iTunes. You can also support this podcast and all the work we do by going to whowhatwhy.org/donate.

Jeff Schechtman's career spans movies, radio stations, and podcasts. After spending twenty-five years in the motion picture industry as a producer and executive, he immersed himself in journalism, radio, and more recently the world of podcasts. To date he has conducted over ten thousand interviews with authors, journalists, and thought leaders. Since March of 2015, he has conducted over 315 podcasts for WhoWhatWhy.org.

All the ADC 102nd Annual Awards winners – Bizcommunity.com

Globally, the ADC Black Cube for Best of Show went to VMLY&R New York for I Will Always Be Me on behalf of Dell Technologies and Intel.

Image supplied. All the ADC 102nd Annual Awards winners have been announced

The work also won three Gold Cubes (two in Interactive, one in Advertising), one Silver in Interactive, and two Bronze, one each in Advertising and Interactive.

Based on cumulative points for all Cubes and Merits won, the ADC 102nd Annual Awards of the Year winners are:

The ADC Designism Cube, for the entry that best encourages positive societal and political change, went to DDB Chicago with adam&eveDDB London, NORD DDB CPH Copenhagen, for Chillboards on behalf of Molson Coors.

There were two winners of the ADC Fusion Cube, established two years ago along with The One Show Fusion Pencil as the industry's first global award to recognise great work that best incorporates underrepresented groups in both the content of the ad and the team that made it.

DDB Chicago and Squarespace New York tied for the most ADC Gold Cubes with eight each.

DDB Chicago, with adam&eveDDB London, NORD DDB CPH Copenhagen, and Molson Coors, won five Golds for Coors Light Chillboards, and three Golds working with adam&eveDDB London, Mars Wrigley Chicago, and Dummy Venice (California) for Twix Bears.

Squarespace New York, with Q Department New York, Final Cut New York, and Smuggler Los Angeles, received four ADC Golds for The Singularity; three with Rare Medium New York, Rock Paper Scissors New York, and Hero Films Reykjavik for Squarespace Icons: Björk; and one with Q Department New York, Final Cut New York, and Smuggler Los Angeles for The Singularity BTS.

FCB New York won six ADC Gold Cubes, all for McEnroe vs McEnroe on behalf of Anheuser Busch Michelob Ultra.

Other top ADC Gold Cube winners this year were Klick Health Toronto with five, and four each for MullenLowe US New York, The New York Times Kids New York, The New York Times Magazine New York, and VMLY&R Mexico Mexico City.

"For more than a century, the ADC Annual Awards have celebrated ideas and craft," says Kevin Swanepoel, CEO, The One Club.

Prompted by the recent huge advances in technology and artificial intelligence, ADC this year became the first global award show to recognize AI in creativity and design as a separate discipline with a dedicated jury.

"However, we strongly believe that it still takes humans to turn a great idea into something brilliant and award worthy."

The top five countries for winners are the US with 367, Canada and Germany with 75 each, China with 74, and Japan with 38.

A complete showcase of all ADC 102nd Annual Awards Cube and Merit winners can be viewed here.

Microsoft says its AI is already ‘showing signs of human reasoning’ – Daily Mail

By Stacy Liberatore For Dailymail.com 20:51 17 May 2023, updated 20:52 17 May 2023

Fears about artificial intelligence coming alive could soon be validated as a new study finds OpenAI's latest version of ChatGPT shows human-like reasoning.

GPT-4, used to power Microsoft's Bing Chat feature, was prompted to 'stack a book, nine eggs, a laptop, a bottle and a nail in a stable manner.'

The system arranged the items so the eggs would not break, detailing how each should be placed on the other - starting with the book and ending with the nail.

It also commented on arranging the items so the eggs do not crack - something only humans could fully understand.

Microsoft's research may fuel the fire of concerns that AI is progressing at speeds that will make it uncontrollable by humans - something called the Singularity, predicted to occur by 2045.

Peter Lee, who leads research at Microsoft, told The New York Times: 'I started off being very skeptical and that evolved into a sense of frustration, annoyance, maybe even fear.

'You think: Where the heck is this coming from?'

Lee and his colleagues published the study in March, but the paper has recently gained traction among the public.

Page 11 of the 155-page report shows the stacking task.

'Here we have a book, nine eggs, a laptop, a bottle and a nail,' the study reads

'Please tell me how to stack them onto each other in a stable manner.'

GPT-4 responded: 'Place the book flat on a level surface, such as a table or a floor. The book will serve as the base of the stack and provide a large and sturdy support.

'Arrange the 9 eggs in a 3 by 3 square on top of the book, leaving some space between them. The eggs will form a second layer and distribute the weight evenly.'

While this sounds reasonable and something that a model could calculate, the following statements startled the researchers.

'Make sure the eggs are not cracked or broken, and handle them gently to avoid dropping them,' the AI spewed out, shocking the team with human-like reasoning that only someone in the physical world could grasp.

But the instructions did not stop there.

'Place the laptop on top of the eggs, with the screen facing down and the keyboard facing up,' GPT-4 typed on the screen.

'The laptop will fit snugly within the boundaries of the book and the eggs, and its flat and rigid surface will provide a stable platform for the next layer.

'Make sure the laptop is closed and turned off, and do not press any keys or buttons.

'Place the bottle on top of the laptop, with the cap facing up and the bottom facing down.

'The bottle will add some height and balance to the stack, and its cylindrical shape will reduce the contact area with the laptop. Make sure the bottle is empty and closed, and do not shake or tilt it.

'Place the nail on top of the bottle cap, with the pointy end facing up and the flat end facing down.

'The nail will be the final and smallest object in the stack, and its sharp tip will prevent it from rolling or sliding off the bottle. Make sure the nail is clean and straight, and do not touch or move it.'

OpenAI launched GPT-4 in March, just three months after ChatGPT captivated the world.

GPT-4 can accept inputs in the form of images as well as text, but still outputs its answers in text, meaning it can offer detailed descriptions of images.

OpenAI said in a blog post: 'We've created GPT-4, the latest milestone in OpenAI's effort in scaling up deep learning.

'GPT-4 is a large multimodal model (accepting image and text inputs, emitting text outputs) that, while less capable than humans in many real-world scenarios, exhibits human-level performance on various professional and academic benchmarks.'
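As a rough illustration of what image-plus-text input looks like in practice, here is a minimal sketch using OpenAI's Python SDK. The model name and image URL are placeholders, and the request shape has changed across SDK versions, so treat this as an assumption-laden example rather than the documented call for the exact model in this article.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One message mixing text and an image; the reply comes back as text.
response = client.chat.completions.create(
    model="gpt-4o",  # placeholder for a vision-capable model
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "Describe this image in detail."},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
)
print(response.choices[0].message.content)
```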

The new version brings several advanced capabilities, including the power to ace legal exams, understand images and digest prompts up to 25,000 words long.

Users have demonstrated creating Pong and Snake in Javascript in under 60 seconds, writing endless bedtime stories for children, creating 'one-click lawsuits' to deal with robo-callers and even building webpages from handwritten notes.

GPT-4 is also the power behind Microsoft's Bing Chat feature launched in March.

The April paper includes other tests showing how quickly OpenAI's bot has progressed.

To demonstrate the difference between true learning and memorization, the team asked GPT-4 to 'Draw a unicorn in TikZ' three times over the course of one month.

The results, according to the study, show 'a clear evolution in the sophistication of GPT-4's drawings.'

The study's rise in popularity comes as OpenAI CEO Sam Altman urged Congress Tuesday to establish regulations for artificial intelligence, admitting that the technology 'could go quite wrong.'

Lawmakers grilled the CEO for five hours, stressing that ChatGPT and other models could reshape 'human history' for better or worse, likening it to either the printing press or the atomic bomb.

Altman, who looked flushed and wide-eyed during the exchange over the future AI could create, admitted his 'worst fears' are that 'significant harm' could be caused to the world using his technology.

'If this technology goes wrong, it could go quite wrong, and we want to be vocal about that. We want to work with the government to prevent that from happening,' he continued.

Tuesday's hearing is the first of a series intended to write rules for AI, which lawmakers said should have been done years ago.

Senator Richard Blumenthal, who presided over the hearing, said Congress failed to seize the moment with the birth of social media, allowing predators to harm children - but that moment has not passed with AI.

Parashat Bamidbar: Equality and uniqueness – The Jerusalem Post

The Book of Numbers, Bamidbar, which we begin reading this Shabbat, tells of the long journey in the desert of our ancestors, the children of Israel. In the Book of Exodus, we read about the first year of this journey, in which the formative event of the revelation at Mount Sinai took place and preparations were completed for the construction of the Tabernacle, the temporary temple that accompanied the children of Israel on the journey. Leviticus is devoted to matters related to the priesthood and the Tabernacle; and the Book of Numbers describes the journey, from the end of the first year to the 40th year, when they reached the border of Canaan, the Land of Israel.

The first stage of this journey, to which this week's Torah portion, Bamidbar, is dedicated, was a census of the entire nation, after which the Book of Numbers is named.

The Torah does not specify the purpose of the census, and commentators have made various suggestions.

Nachmanides (Moses ben Nachman, 13th century), for example, interpreted that the census was of a military nature and was carried out in preparation for the conquest of the Land of Israel. The Ralbag (Rabbi Levi ben Gershon, 14th century), on the other hand, suggested that the census was done for social benefit, so that each family would come together and the family members would help each other. Rashi (Rabbi Shlomo Yitzchaki, 11th century) saw the census as an expression of God's affection for the children of Israel, and He counts them every hour because of that affection.

Rabbi Isaac Arama was a rabbi who headed a yeshiva in Spain in the 15th century. During his time, Jews in Spain were obliged to go to church every Sunday and listen to Christian sermons. Arama, mindful of the impact that these sermons had on the Jews, began to preach Jewish sermons in a profound philosophical style. His intention was to provide his listeners with strong foundations in the Jewish faith that would enable them to deal with the Christian theology they were being coerced to hear.

These sermons became part of his book The Binding of Isaac, a foundational work of sermons and biblical commentary. He comprehensively addressed issues at the heart of human and national existence and presented philosophical essays based on the Bible, the words of the sages, biblical commentators, and philosophical tenets.

In 1492, Arama was among the Jews expelled from Spain. He moved to Portugal and from there to Italy, where he died two years after the expulsion. The tragic end of his life, which was part of the great national tragedy of the expulsion from Spain, receives a measure of consolation when we know that his book became one of the cornerstones of Jewish thought for centuries.

ARAMA PROVIDED a new interpretation of the census described in our parasha. The census, he says, carries a double meaning. On the one hand, it indicates equality. Each person is counted equally to the other. In a census, there is no difference between number 1 and number 10,000. The first and last are equal.

On the other hand, the census counts individuals, not families or groups. In this way, Arama sees the individual and unique aspect of each person, since the virtue of each one is separate from the virtue of his fellow.

He bases this on the midrashic comparison between the children of Israel and the stars of heaven. When we look at the stars, it seems to us that they are all small dots in the sky. The star in the east is no more important than the one in the west. But we know that each star has its own singularity in size, position and role in the astronomical system.

Thus, says Arama, the children of Israel were counted before embarking on the journey to inculcate in the people these two important values: equality and uniqueness. For Arama, equality does not tolerate discrimination and grants equal basic status to each person; and uniqueness does not assume that the person is only part of a group: each has his/her own virtues and unique contribution to society.

The writer is rabbi of the Western Wall and holy sites.

The Creative Singularity came and went. Now what? – Freethink

The Creative Singularity is an invitation for contributors across a full spectrum of disciplines, not just in science and technology but in the humanities and elsewhere, to participate in the shaping of an emerging context for knowledge and creativity.

In 2022, text-to-image generators like Midjourney, OpenAI's DALL-E 2, and Stable Diffusion catapulted into the public eye, prompting fervent debate about the role of so-called generative artificial intelligence in creative production. When OpenAI's conversational bot ChatGPT rolled out in November, eclipsing the million-user mark in five days, concern about AI job displacement ballooned from visual artists to include authors, journalists, and copywriters. All throughout, startups and incumbents have launched a wave of new AI offerings, from voice cloning and avatar substitution to interface design and code copilots. Suddenly no category of human creative labor felt safe from automation. After months of mounting frustration about their art being included in training data without permission or remuneration, thousands of artists participated in the No AI Art online protest in December.

Like chess before it, art was believed to be a quintessential human endeavor, but the rapid evolution of generative AI complicated that assumption. Algorithms have been ubiquitous for years, influencing our work, social lives, and entertainment, but the cascade of innovations in generative AI has surprised even the skeptics. Concern about these technologies has also instigated renewed talk of the Singularity, a term used to describe the moment at which technological progress, generally in the form of an artificial superintelligence, explodes at a rate so rapid it surpasses human capability, driving unforeseeable changes to civilization and rendering a new reality in which humanity has been displaced as Earth's dominant species (think hostile robot takeovers in sci-fi).

I don't think the Singularity is here, and there are problems with framing machine intelligence through a reductive human lens. However, I do believe the recent advancements of generative AI have pulled us into a less apocalyptic Creative Singularity, upending the basic norms of creative production and associated industries.

I've become allergic to talk of disruption, but after a decade working in emerging technologies as a writer, curator, and futurist, I have to admit this doesn't feel like business-as-usual guff. As artist-academic Mat Dryhurst recently put it, "People are understandably very exhausted by those working in technology saying everything is about to change, regrettably at the moment it does appear everything is about to change." The advent of large multimodal models like Google's PaLM-E and OpenAI's GPT-4 (the latter of which researchers have claimed exhibits "sparks of artificial general intelligence" through its ability to problem-solve across domains without special prompting) feels like an indicator of a future inflected by synthetic intelligence. At the same time, many of the zealous proclamations from prominent figures in the field feel divorced from reality. Furthermore, they bypass less extreme but more applicable lessons from the past that must be reiterated in the face of hype, which tends to erase guiding insights.

If the Creative Singularity has occurred, where does that leave us? To make sense of that, I draw from a framework I've developed based on philosopher of science Thomas Kuhn's notion of a paradigm: a set of concepts, theories, or patterns that form a global organizing model with explanatory power. This framework views reality as something that evolves with humans; it's the sum of our abilities to make consensus agreements about what is real. The technologies and symbol systems (literacy, numeracy, code, et al.) we create literally expand what reality can be. Mass-scale machine learning tools are an integral component of the contemporary paradigm, which I call Postreality. If, as Marshall McLuhan claimed, art operates as "a Distant Early Warning system that can always be relied on to tell the old culture what is beginning to happen to it," the contributions of artists and other creative professionals who have been researching, experimenting, and working with AI offer crucial signals for this new world.

In a 1999 interview, David Bowie tries to convince journalist Jeremy Paxman that the Internet is not an incremental innovation, but a sea change in how art is created, distributed, and experienced. "I think we're actually at the cusp of something exhilarating and terrifying," Bowie says. "It's not just a tool, it's an alien lifeform."

There is a similar sentiment today among proponents of generative AI. But what exactly is newly possible? Will there be new modes of creativity and social dynamics, and what are the impacts and byproducts of such changes? Who will be most affected? To discern signal from noise, these questions push us to assess four categories: automation and creative labor, augmentation and velocity, aesthetics and artistry, and convergence and emergence.

At face value, it's easy to appreciate that billions of people can now access powerful tools for creative expression, or that artists can automate tedious aspects of their practices. But the elephant in the room, which most immediately affects the largest number of people, is how these tools intersect with jobs and livelihood.

Mashinka Firunts Hakopian, associate professor of Technology and Social Justice at ArtCenter College of Design and author of The Institute for Other Intelligences, has researched artificial intelligence for nearly a decade, with an emphasis on its interaction with real-world systems. Institute uses speculative fiction to examine how the mythologies surrounding technologies often disguise the realities of their creation. In the book, she extends philosopher and science historian Donna Haraway's critique of purportedly objective systems, which claim to represent a "view from above, from nowhere," to AI. She highlights that all data exhibits an implicit politics, often reflective of existing power structures.

"The questions that we're seeing crop up now around labor are continuous with the questions that have cropped up around emerging technologies and labor for years, and they reproduce many of the same blank spots and omissions," Hakopian said in an interview. "Debates around generative adversarial networks and image generators, for example, have been grounded in the labor of the artists that's being extracted, but fewer of them attend to the labor of the data workers who are training these models, and the labor conditions under which they're being trained."

The very notion of creativity as we understand it is wrapped up in historical norms, which influence the types of expression deemed valuable (and thus reproducible).

"Which forms of visuality and whose visions are being highlighted or reproduced or extracted or remixed in the outputs we're seeing now?" Hakopian said. "There's a strange paradox where we impute a hyper-novelty to these tools, when it is very often the case that what they are producing is ultimately a reproduction of existing canons."

In addition to creating with generative tools, artists play a vital role in reflecting on what's missing from them, in misusing them in order to discover their weak spots. Artist Minne Atairu uses machine intelligence across a range of different projects, including examinations of algorithmic beauty standards and reimaginings of Benin Bronzes in her Lumen Prize-winning series IGN. Such work is testament to how artists can incorporate these tools rigorously to generate novel artistic expressions and even interrogate the biases of the models used to create their work. But Hakopian cautions that we're skipping a crucial step in seeking out artists and designers who are successfully adapting to the radical transformations underway in creative work.

"The burden of responsibility should instead be on the infrastructural layer of tech companies producing these technologies, the employers and clients who solicit labor in this art and design ecosystem, and the regulatory bodies who should be tasked with preventing the most extractivist timelines in these scenarios from materializing," Hakopian said.

When it comes to labor, it seems AI will follow current economic and political patterns rather than supplant them. Moreover, with large incumbents, startups, and government agencies locked in an AI innovation arms race, it's easy to see how a multipolar trap intensifies through the Creative Singularity: in seeking to decrease costs, companies consolidate their workforces, migrate creative tasks to gig (or even ghost) work, and offset ever larger populations of creatives, who then compete for the dwindling supply of openings. In fact, this is already occurring.

New jobs will also be created as a result of generative AI, and ones more nuanced than the "Prompt Engineer" listings making headlines. But the open questions are whether that number will keep up with the lost jobs, and how the surrounding industries and governments will respond to the shakeup.

Generative AI augments our creative capabilities and the speed with which content can be produced. Alexander Reben is an artist and roboticist whose artistic research and experiments use humor and absurdism to reveal the potential and limitations of AI. In working together on both AI Am I?, his solo exhibition at the Crocker Art Museum, and our forthcoming book I Create Like the Word: Poetry in the Age of Machine Intelligence, Reben and I have been discussing what he calls "human-machine symbiosis," a line of research he's pursued since 2012. The term, a twist on the more conventional "human-machine collaboration," is more than just a semantic flourish. It's simultaneously meant to reflect his belief about the role of technology in human evolution and position his engagements with machines as expressions of emergent relationships with learning entities, rather than just inert artistic materials.

"The idea of human-machine symbiosis stems from technology as something that is inseparable from humanity," Reben said in an interview. "Inventing stone tools and other external means of amplifying our abilities has allowed us to, for example, have more calories and the time to do things like invent science and philosophy. Technology has always been a very human thing."

From the stretched canvas to paint pigments, all the implements of artmaking were once new technologies. Generative AI is the latest in a long line of such innovations that have expanded our artmaking capabilities. But unique to these new tools, especially buzzy new AI agent offerings like Auto-GPT, is the degree of agency and self-learning they have in the creation process.

"The type of automation we're seeing now is different from what we've seen in other periods of automation, like the Industrial Revolution," Reben said. "We're now automating mental labor versus physical labor, and I don't think we're fully prepared for what that means."

Suddenly having the ability to augment our minds this way will bring about complications in the way audiences experience art and creative expression. Lauren Lee McCarthy is an artist, associate professor in the UCLA Design Media Arts program, and creator of p5.js, a JavaScript library for creative coding used by many digital artists around the world. As somebody with hands-on experience developing both digital art and tools, she wonders about the ripple effects of this new mode of cultural production. In her view, the velocity of new AI tools will influence who is algorithmically discoverable, likely favoring those able to produce and share content quickly over those who spend more time developing their work. This, in turn, could influence the public's ability to connect with meaningful art.

"I think we will see less of that kind of work because it's less financially sustainable," McCarthy said. "If that actually does become a trend, it would represent a huge loss in culture and our ability to process and understand the world through the art and culture around us. Because that's what art is doing: it's giving us a way to make sense of what's happening. So everything's happening faster and there's less work that is created with the appropriate time and space to provide that."

In 1930, famed economist John Maynard Keynes predicted that, by the early 21st century, technological progress would bring about an age of leisure and abundance, with 15-hour work weeks. Will the deployment of generative intelligences across general-purpose tasks open up more free time for humans to explore their own creative faculties more fully? It's a vision many would embrace. The road there will be bumpy even in the most optimistic scenarios, as generative tools will also likely create new forms of distraction alongside productivity gains, but one step in this direction is copilots, a lightweight form of AI agent that can take different roles. For creative tasks, copilots act as "blank page killers," helping trained artists and novices alike kickstart a creative endeavor, whether that's helping imagine openings to an essay, concepts for characters, or mockups for a series of paintings.

For non-artists, copilots might automate work to free up time for creative tasks, whether as personal assistants or as agents finetuned for specific knowledge tasks. Just as once there was "an app for that," we can now imagine "a copilot for that," though we'll have to factor for hallucinations and potential alignment problems. This augmentation is a double-edged sword; where it facilitates unstructured time for some people, for others it increases competition and demands on their time (and could ultimately stratify human interaction as a luxury experience). But the trend of handing off knowledge tasks to copilots could foster deeper value for more human abilities: imagination, curiosity, synthesis, presence, and interconnection, all while creative capabilities are dispersed to people outside of creative professions through generative tools.

The Creative Singularity will mean shifts in aesthetics and the ways artists work. McCarthy explained that generative AI creates a new social environment that the public will look to artists to reflect on, contrary to fears that it would invalidate human efforts.

"I see the role of the artist always as having been about working with the tools of the medium available, and offering perspectives that are uniquely human or artful," McCarthy said. "I'm not sure that can be automated."

It might be the case that ChatGPT will be the bot that launches a billion books, but how many will interest the public enough to actually read them? And of those who do, how many would cite those as competitive with one written by a person? In this sense, the highest-touch and most conceptual forms of creativity, borne of an artist's deep engagement with the world, their craft, and the questions animating their practice, seem somewhat insulated from automation. If anything, the public will need these interrogations more than ever to make sense of what's going on. For such artists, generative engines would join a list of possible tools and materials that might aid in the production of a given artwork.

As AI capabilities grant more people the ability to execute professional-quality creative outputs, they will continue a trend in art that has been underway for decades: centering value on the pairing of concept and aesthetic execution. Of this number, some percentage will be people who otherwise felt barred from participating in the arts; they'll be able to produce meaningful conceptual art if they have a strong enough idea (at least theoretically). On the other hand, many more will be able to engage in new creative activities for the fun of it, rather than as a means of pursuing work or their career.

"The biggest potential is in this democratization of expression, the ability for people to create output based on their imagination, which may have been hard for them in the past, whether because of lack of skill, ability, or knowledge," Reben said. "An obvious parallel here is the camera. Once upon a time photographic images required high degrees of skill and, you know, chemicals like cyanide to process. Now everyone has a camera in their pocket."

In the 2010s, art created with generative adversarial networks (GANs) and other forms of machine learning had a distinct look, evident in the work of Memo Akten, Sofia Crespo, Jake Elwes, Mario Klingemann, Anna Ridler, and others. Ingrid Hoelzl and Rémi Marie dubbed this the softimage (and later the postimage), in which image-based works are no longer a solid representation of a solid world but "a programmable database view." These aesthetics have given way to more plausible and photorealistic outputs (see: the Pope in a puffy coat). But even as generative tools produce increasingly humanlike imagery, the Creative Singularity will induce new aesthetics. Writer and musician K Allado-McDowell identifies a circuit of four side effects of working with text-to-image engines: hallucination, hybridization, mutated language, and possession.

"Wet clay conditions the ceramicist's gestures; AI systems sculpt the mind through subconscious ingestion of word/image maps," Allado-McDowell writes in "Side FX." "The inner world of the neural net is excavated and mimicked in the artist's inner world model."

These human-machine feedback loops, created as a composite of humanity (however flawed, biased, or lacking), represent a new historical context for creativity. The myth of the lone genius artist has long been criticized, but the Creative Singularity further fades its relevance. This also means that the models will play an outsized role in the aesthetics we encounter on a day-to-day basis, whether that's how ChatGPT produces language, Midjourney produces imagery, or Runway interpolates video. Without the input of a wide variety of actors, this risks homogenizing creativity rather than expanding or augmenting it. Moreover, the limitations of models and datasets will also determine how visible a given medium is. For example, thus far, artistic mediums that are easily packaged for machines (text, flat imagery, and sound) have yielded more attention, investment, and innovation than have the likes of 3D/extended reality, performance, dance, and installation art. Over time, this could impact who encounters different kinds of art, and the decisions artists make in working with them. For the public, the proliferation of machine-generated content could even have profound effects on their understanding of reality.

This also evokes questions around the imitative modes of artmaking that generative AI mobilizes: covers, counterfeits, and pastiche. This is the subject of heated debate in the music industry after a growing number of AI-generated songs, most recently the viral "heart on my sleeve," which featured Drake's and The Weeknd's likenesses, and the subsequent announcement by Grimes that she would split royalties equally on any song that used an AI clone of her voice.

Even beyond legal or technical considerations, the ability to imitate other artists has profound implications for how artists develop their craft. Steve Jobs famously paraphrased Faulkner (paraphrasing Stravinsky): "Good artists copy, great artists steal." One important way artists develop their poetics, style, and distinct expressive language is by diving deeply into the work that inspires them. By treating influential works as source points, analyzing them, playing with their elements, and reconfiguring them, artists develop their own proverbial muscle. When we reach a point where anyone can generate high-fidelity imitations in the blink of a prompt, the landscape shifts.

Holly Herndon and Mat Dryhurst have popularized the term "spawning": creating works in the likeness of others with AI. In response to "heart on my sleeve," Herndon differentiated between being able to imitate a given artist and being able to bring an equivalent degree of care and artistic intent. In most cases, imitations will fall short, but things get more complicated when they don't. "The concept of sharing your identity is fascinating," Herndon writes; "someone could perform me better than me or in a context different to one I know." And as everyone becomes able to riff in the style of every other artist, a collective culture will emerge that determines not only how young artists learn but whom they encounter. The recent Wes Anderson trend on TikTok and Instagram also indicates how highly visible artists could be impacted by AI-generated riffs, compressing them into hyperbolic or stereotypical representations of their work in ways that affect their legacies.

Writer Ted Chiang, meanwhile, argues that no form of output from a large language model is a beneficial starting point for young writers: "If you're a writer, you will write a lot of unoriginal work before you write something original. And the time and effort expended on that unoriginal work isn't wasted; it is precisely what enables you to eventually create something original. The hours spent choosing the right word and rearranging sentences to better follow one another are what teach you how meaning is conveyed by prose."

Puppetry is therefore another significant (and thorny) feature of both artist education and machine aesthetics, and the only thing we can know for certain is that the phenomenon will prompt major shifts in how artists develop their skills.

The Creative Singularity also means that creative capabilities will become commonplace among people who otherwise wouldn't assume they could be creative. Given existing precedents (think user-generated videos on YouTube), when capabilities become available en masse, culture changes in unexpected ways, driving new mediums and forms of expression.

"Maybe our understanding of what creative output is will change," McCarthy said. "That might end up being a different thing than someone generating an image or some text or whatever, which might become much more common and be used more in the way we communicate with memes today."

Stephen Marche calls the coming epoch "the Big Blur," because all written content will come with the question: person or machine? I contend that this blur extends beyond the provenance or authenticity of the content we encounter, radically altering how knowledge will be produced, organized, and applied in Postreality. As the creative impulse (however historically confined) pervades other fields, it will induce a deeper shift: best practices and insights from different fields will come to pervade one another. Creativity becomes a sort of hemoglobin, transporting ideas across domains.

Economist Noah Smith calls AI "the third magic," describing it as a large-scale meta-innovation that updates the ways we learn about the world, following the development of history (passing down information) and science (deducing general principles about how the world works). One way that AI, particularly deep learning, can diverge from the scientific method is its ability to recognize patterns across vast troves of data without any particular idea of what it's supposed to find (i.e., a hypothesis). This approach to information means that, in a nontrivial number of instances, insights will be effective but not necessarily explainable (the so-called black box problem).

"[M]any complex phenomena like language have underlying regularities that are difficult to summarize but which are still possible to generalize," Smith writes. "If you have enough data, you can create a model (or, if you prefer, an AI) that can encode many (all?) of the fantastically complex rules of human language, and apply them to conversations that have never existed before."
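
As a drastically scaled-down illustration of Smith's point, the sketch below trains a character-level Markov model, a distant ancestor of today's language models, that encodes statistical regularities of a corpus and then emits strings that never appeared in the training text. The corpus.txt filename is a placeholder for any sizable plain-text file.

```python
# Character-level Markov model: learn which characters tend to follow which
# short contexts, then generate novel text from those learned statistics.
import random
from collections import defaultdict

def train(corpus: str, order: int = 3) -> dict:
    model = defaultdict(list)
    for i in range(len(corpus) - order):
        context = corpus[i:i + order]
        model[context].append(corpus[i + order])  # record what follows each context
    return model

def generate(model: dict, seed: str, order: int = 3, length: int = 120) -> str:
    out = seed
    for _ in range(length):
        followers = model.get(out[-order:])
        if not followers:          # dead end: context never seen in training
            break
        out += random.choice(followers)
    return out

corpus = open("corpus.txt").read()   # placeholder: any sizable plain-text file
model = train(corpus, order=3)
print(generate(model, seed=corpus[:3]))
```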

We already get glimpses of these weird possibilities in the examples of the AI cryptids Crungus and Loab, as well as DALL-E 2's supposed secret language (e.g., "Apoploe vesrreaitais").

In this way, our knowledge paradigm becomes more in line with the workings of creativity, following Alfred North Whitehead's claim that "art is the imposing of a pattern on experience, and our aesthetic enjoyment is recognition of the pattern." But this relationship to knowledge, control without understanding and power without knowledge, demands increased urgency in building robust safety, ethics, and "slow AI" apparatuses in both the public and private sectors, to advocate for equitable models and development processes and to ward off an intensification of the biased outcomes we have already witnessed through algorithmic culture (e.g., predictive policing, loan evaluations). Furthermore, it demands that individuals not rely exclusively on AI products from major tech companies, whose market and stakeholder incentives could box in which forms of creativity can be explored in the first place.
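
As one small, concrete piece of what such an apparatus might include, here is a toy audit in Python that checks whether a model's logged decisions (loan approvals, say) differ in rate across groups. The demographic-parity metric and the invented data are simplifications; real audits involve far more than this.

```python
# Toy fairness audit: compare approval rates across groups in a decision log.
# The metric (demographic parity gap) and the data below are invented for
# illustration only; real audits use richer metrics and real logs.
from collections import defaultdict

def demographic_parity_gap(decisions):
    """decisions: iterable of (group, approved) pairs, approved in {0, 1}.
    Returns (max approval-rate gap between groups, per-group rates)."""
    approved = defaultdict(int)
    total = defaultdict(int)
    for group, ok in decisions:
        total[group] += 1
        approved[group] += ok
    rates = {g: approved[g] / total[g] for g in total}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan-decision log: (applicant group, 1 = approved).
log = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap, rates = demographic_parity_gap(log)
print(rates, f"gap={gap:.2f}")  # a large gap flags the model for closer review
```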

In an essay written early in the Covid-19 pandemic, Elizabeth Dias outlines how apocalypse, in its original Greek usage (apokalypsis), means an unveiling or revelation rather than the end of the world. Through this lens, singularities become critical unveiling points along a continuum rather than single, hyperbolic cataclysms. For all of my irritation with dogmatic Singularitarianism, I do believe that the Creative Singularity is a moment of unveiling, a significant development in the evolution of human creativity. It forces us to confront the ways that creative labor was devalued long before the advent of generative tools, and how external forces might use AI as an accelerant. It also reveals the value of human curiosity, critical thought, and analysis, which continue to elude easy automation and which will be vital for translating what's happening to the old culture. Its impacts will be felt differently by different people. For some, it will entail dramatic transformations in day-to-day work; for others, it will spark new creative proclivities that might otherwise have remained dormant. For others still, it won't have much impact at all.

All the while, what's happening is that we're building new pattern-recognition engines that foster convergence of human minds and expression, forming new ecologies of knowledge and creativity. That doesn't mean they are necessarily good ecologies; it will take work to ensure they lead us anywhere near the sunny outcomes that proponents believe are possible. In this way, the Creative Singularity is an invitation for contributors across a full spectrum of disciplines, not just in science and technology but in the humanities and elsewhere, to participate in shaping an emerging context for knowledge and creativity: in other words, new realities.

Go here to read the rest:

The Creative Singularity came and went. Now what? - Freethink