Listen to the podcast above or on the go via Apple Podcasts or Spotify.
Recorded on June 1, 2023
Kirk Spano: Hello. I'm Kirk Spano with Seeking Alpha. And today, I am interviewing Ramy Taraboulsi, who recently wrote an article describing the singularity, the merger of humanity with machines and artificial intelligence, and all the consequences, the benefits and the negatives, that could come from that.
It was maybe my favorite article that I've read this year on Seeking Alpha. So I do recommend that everybody read this article, take all the links that are in it and go and visit some of the links, and really consider where we are in history and whether or not it's accelerating as fast as Ramy suggests that it is.
Ramy, how are you doing today?
Ramy Taraboulsi: I'm doing perfectly fine, Kirk. Thank you for inviting me to this conversation. I really appreciate it. I am currently in Hyderabad, India. I ordinarily reside in Toronto, Canada, but I'm on a trip to Hyderabad right now. It's interesting how far technology has taken us. You're currently in the United States and I'm in Hyderabad, India, and we're talking to each other as if we are practically next door to each other.
KS: I ran a string across the ocean so we could talk. Yes, it is kind of amazing. I remember early in my career talking to people in Europe or Southeast Asia or India, and the telephone connection would crackle, or we'd have that split-second echo where we had to, like, pause to hear what was coming back over, and it's pretty amazing to me that this is so easy right now. As I told you off air, and we'll get back into this conversation.
Way back in the early 90s, when I was finishing up college, I wrote a paper about, maybe I'll get to see all of the things that are happening now in my lifetime. I drew heavily from Lewis Thomas who had written about genetics way back in the 1970s and I read your article and it just brought a lot of that back. Why don't we get started here and just describe in your own words and thoughts, what is the singularity?
RT: If you ask 10 different people what the singularity is, most likely you'll get eight different answers.
KS: That's better than asking 10 economists, because then you'd get 12 answers.
RT: Yes, I guess so. I guess so. If you look at what Ray Kurzweil has said, the singularity is basically the interconnection between three key areas of technology, which are nanotechnology, genetics and artificial intelligence. When these three areas reach a certain point where they can interact with each other and produce a particular entity that is superior to the human being, we'll get what we call artificial super-intelligence or artificial general intelligence, where a machine is capable of doing the things that a human can do.
And when we reach that level, you'll find at that point that we don't know what will happen. Why do we call it the singularity? Because it comes originally from the concept of a black hole. All the mathematical rules, all the physics rules, fail at the point of the singularity, which is at the center of the black hole. After you pass the event horizon, how do things operate? Some physicists think that they have some theories, but the mathematics behind these theories fails.
What will happen at the singularity when we have these three areas of technology merging together? That's what people don't know. And that's why we called it a singularity, because we don't know what will happen in there. And whatever we're saying, the only thing I can tell you is that it might be correct, it might not be correct. And whoever says that they know what will happen, they don't know. So did I give you an answer to that one?
KS: Yes, I think that everybody has ideas about what happens. And my name is Kirk. Yes, it's not taken from Star Trek, but I became a huge Star Trek fan. And if you've watched all the shows and all the movies from Star Trek, they explore this idea a number of times. And we see the negative things that could happen, the Borg, the Borg try to create the singularity the way that they want to and it becomes oppressive.
You have other societies, maybe the Vulcans, who are looking for it and it ends up lacking emotion. And then there's other incarnations and ultimately you have the Utopian one, where we could put it altogether well and it allows us to advance humanity without sacrificing the things that make us human. I'm optimistic that we can pull that off over a few generations. However, my fear and I tell my subscribers and clients this all the time, my fear is that we blow it up in the meantime, and kind of thinking Planet of the Apes, right?
I cite science fiction all the time, because science fiction, Jules Verne, Carl Sagan, you go back and you'll take a look at some of the things that have been in science fiction, decades and decades ahead of reality and a lot of it comes true. So, we have control over this at this point. How do we get to a place that's better and not worse?
RT: That's a very difficult proposition, how to get to a place that's better and not worse. There's a big potential that we can reach a Utopian state like you're suggesting and that's my big hope. We can do that. Some people are suggesting that we have to slow the AI down. We cannot do that. We cannot slow it down. When you think
KS: Why can't we?
RT: The reason for that is that there's a huge race happening right now. From my perspective, I see many companies that are advancing in AI. Think about NVIDIA (NASDAQ:NVDA), for example, it's doing lots of things in AI, OpenAI, Microsoft (NASDAQ:MSFT), and so on. But I personally think that these companies' investments in AI pale compared to the militaries' investments in AI around the world.
I want you to think about something. Take the United States, for example. It has a budget of around $800 billion for its military, which is as much as the next 10 countries combined.
KS: Right.
RT: But the number of soldiers in the United States has been dropping by around 5% year-over-year for the last 10 years, and the budget is going up. So is it that the soldiers are making more money, or are they investing in something that we don't know? Just go to Lockheed Martin (LMT), for example, which is one of the biggest contractors, and look at their motto. Their motto and the case they make for what they're doing: they're trying to automate everything.
And how will they automate it? They'll automate it with AI. So the military is spending huge amounts of money, and I don't think that the military will be in a position to stop its progress, for fear of what other militaries are doing. So, I don't think that stopping it will be a possibility anytime soon, primarily because of this. Yes, you can stop the companies, but you cannot stop the military.
KS: Right. Well, and Eisenhower warned us about this in his farewell speech when he said beware and be careful of the military industrial complex. And while we certainly want a military and to feel safe, at what point does the military make us less safe? You know, that's something explored in fiction all the time, right? The military
RT: It is, it is.
KS: takes an idea that could be good and they turn it into war. Is there a spiral that we could, I mean, that's the thing I worry about, right? I just said that a minute ago. I do worry that we have that spiral. What do you think we can do to prevent that?
RT: Well, think about the following. Let's go back to human beings, to basics. You take one person on their own, how much can they progress? Very limited. You take a computer on its own, how much can it progress? Very limited. There is something called APIs, which are a way for computers to communicate with each other. I don't think that we can stop the progress of AI in general. But what we can do is impose certain controls on how the computers communicate with each other. That's one thing that we can do.
And if we impose such a control on how the computers communicate with each other, we can control the amazing, incredible speed at which AI is progressing. It's progressing faster than anyone can manage right now. And the only way that I personally think we can control it is by controlling the way that computers communicate with each other. I don't see that we can stop people from creating new neural networks or stop the research in that particular area; that's not possible. But can we impose controls on the communication, on the APIs?
I think that it's more feasible to do something like this. How to do it? I don't know. Some technical experts might be in a better position to do something like this, or maybe we need a brainstorming session to discuss how we can control the APIs between computers that are AI-driven. I think that this is the only way that I can think of that we can control it.
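Ramy leaves the mechanism open, but the kind of API-level control he is describing can be sketched. The following is purely a hypothetical illustration (the class and the model names are mine, not any real system): a gateway that sits between AI systems and only permits calls between allow-listed pairs, rate-limited even then.

```python
# Hypothetical sketch of gating inter-computer API communication.
# Names ("APIGateway", "model_a", "model_b") are illustrative only.
import time
from collections import defaultdict

class APIGateway:
    def __init__(self, allowed_pairs, max_calls_per_minute):
        self.allowed = set(allowed_pairs)       # permitted (caller, callee) pairs
        self.limit = max_calls_per_minute
        self.calls = defaultdict(list)          # caller -> recent call timestamps

    def permit(self, caller, callee, now=None):
        """Allow a call only if the pair is allow-listed and under the rate cap."""
        now = time.time() if now is None else now
        if (caller, callee) not in self.allowed:
            return False
        # Keep only calls inside the trailing 60-second window
        window = [t for t in self.calls[caller] if now - t < 60]
        self.calls[caller] = window
        if len(window) >= self.limit:
            return False
        self.calls[caller].append(now)
        return True

gw = APIGateway({("model_a", "model_b")}, max_calls_per_minute=2)
print(gw.permit("model_a", "model_b", now=0.0))   # True
print(gw.permit("model_a", "model_b", now=1.0))   # True
print(gw.permit("model_a", "model_b", now=2.0))   # False: rate cap reached
print(gw.permit("model_b", "model_a", now=3.0))   # False: pair not allow-listed
```

The point of the sketch is that the control lives in the plumbing between systems, not inside any one model, which is what makes it enforceable even when the models themselves keep improving.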
And actually, you'll be surprised, Kirk, but I have not heard anyone talking about that as a prospect of controlling AI. Have you heard of that before?
KS: I've heard the discussions, particularly I've been paying attention to Europe, because I think that they're usually pretty close to a good idea on almost everything when it comes to the social aspects of regulation. I don't know that I've heard about controlling the way that they communicate through the APIs, but I have heard of controlling the dataset. So, if you control the dataset, you can teach the AI in a better way.
One of the things that I've worried about is the AIs that are out there and the data that they're scraping from the Internet. Some of that data is just factually wrong, which lends itself to the hallucinations that AI has. And I don't know if everybody knows that term, but AI hallucinates, because it gets bad data, doesn't know what to do with it, and spits out a bad answer.
RT: Yes.
KS: To give an example, I play in the World Series of Poker and I'm actually going to be leaving in a couple of days. And I asked ChatGPT a bunch of statistical questions, and I knew the answers going in. Unless I phrased the question just right with the right amount of detail, it gave me like six wrong answers in a row. And it became a challenge for me to ask the question in a way that it could access the correct data to give me the right answer. And it just kept spitting out bad answers until I kept amending the question, which I've learned in life.
I think the hardest thing to do, when you're trying to figure something out, is ask the right questions, so you get the relevant answers. So I'd be curious if the regulatory bodies can get ahead of this, which is almost never the case; they're almost always behind. They're behind on cryptocurrency, they're probably behind on technology issues from 20 years ago. Certainly, I think they're struggling with the issues of genetics. I wonder what they will do with Neuralink when Neuralink works, because it's going to eventually.
RT: But I hope it works. I hope it works. The first thing that they are targeting right now is spinal cord injuries.
KS: Right.
RT: And if it works, it will be a huge blessing. That's an example of how AI can actually help us.
KS: Right.
RT: With Neuralink, for example, they put the implant in your brain, and through Bluetooth it will communicate with a computer or a phone. And this phone will be connected to a motor or some sort of electrochemical device that will send signals to your muscles so that your muscles can move. And that will be trained through the AI.
KS: Right.
RT: So something like this can solve one of the biggest problems, which is spinal cord injuries, which we cannot solve medically right now. So, I hope it will work. But at the same time, we're talking here about receiving data from the brain. What about putting data into the brain?
KS: There you go, that's where I was going to go.
RT: You can get data. If you can get data, why not put data in?
KS: Right.
RT: And if you put data in the brain, how can you control that? Will we get to the point where we have telepathy among people? Possibly, that's the positive part. Or maybe another part will be that someone will be controlling another person through these implants?
KS: Make somebody pick up a tool?
RT: For example, it's a little bit farfetched, but that's a possibility. Soon enough, it will be a possibility. Like Elon Musk said once, he mentioned - well, he was talking about something else. But just imagine: 45 years ago, the first computer game that ever came out was Pong. Remember that game?
KS: Yes, all right. Thank you for that?
RT: 45 years ago, that's 45 years ago - see how much it has progressed to the games that we have right now. Just imagine another 40 or 45 years. Where would we be?
KS: Right.
RT: From Pong to where we are right now.
KS: Right.
RT: From where we are right now, another 45 years? And imagine, the progress that we had over those 45 years mostly happened over the last five to 10 years, that's it. The curve went up like this, exponentially, in terms of the progress.
KS: Right.
RT: And this exponential growth is not expected to abate by any means. The difference between what we're experiencing right now and the other industrial revolutions is that in the other industrial revolutions, the machines were not improving themselves. They required us, who are limited, to improve the machines. Right now, you can have a neural network that creates another neural network.
KS: Right.
RT: A neural network creating another neural network: effectively, it is becoming a species right now. Because the definition of a species is that it can procreate, and its offspring are in its same image. A neural network is creating another neural network in its same image. That's a species that we have right now, at least following that definition of a species. So what will happen after that? Kirk, your question is not easy to answer.
KS: What, the women in my life have always told me that I'm simple?
RT: And I'm sure that they know better than me.
KS: So there's a lot to unpack there. One of my first mentors on technical trading and quantitative trading was a guy named Murray Ruggiero. And he was a legitimate rocket scientist who decided to start building neural networks, I believe in the 1990s, for the financial industry. And I learned a lot from him. I had very intermittent contact with him, so when I say mentor, it's very loose. But I learned a lot from him early in my career. I was lucky to get introduced to him in the early 2000s, and then I worked with another entity, another financial outfit, and we bumped into each other in like 2016 or something.
And I bumped into him again out in New York at a traders conference. Those neural networks - building them seems like rocket science to everybody, right? But once it's done and the AI learns how to do it, all of a sudden I think it becomes a question of making sure that the AI doesn't create something evil, for lack of a better word, right, and keeps it in its lane. Most AIs are task-driven, correct? They're not the super-intelligence. So, we're still a level away...
RT: We're not there yet.
KS: from Skynet and things like that. So where do you think we are? And I'll frame this with a conversation I've had with my subscribers probably 50 times now. When I went to CES, the Consumer Electronics Show, in 2020, a lot of the things that are just getting invested in now, the AI hype, that was a big theme three years ago, and now it's an investment.
What is the evolution and the speed that you're seeing to go from the generative AI that we have now, solving various technological problems like energy control, controlling the grid, things like that? How do we go from where we are now, through the things that people are doubting will happen in the next five years with decarbonization, or pick a topic, all the way to the superintelligence? Do you really think that can happen in a decade?
RT: I think it can happen in a decade, but there's one big problem that needs to be resolved first.
KS: Okay.
RT: People need to understand how a neural network operates. When people think about a neural network, what is a neural network? I'll just talk technical a little bit right now. It's simply an approximation of a nonlinear multivariate regression problem. It's a regression problem.
KS: That sounds like something I got wrong in calculus.
RT: It's statistics, yes. And most people get it wrong. It's a nonlinear multivariate regression, the kind of problem that, if you want to solve it using traditional methods, you don't have enough time in the universe to solve. So what do we do? We create a neural network to approximate a solution, using something like stochastic gradient descent and backpropagation, all this crazy stuff, but it's an approximation. The issue with this approximation is that it comes up with values for the parameters of that particular regression problem.
Finding these parameters is basically what we call the training of a neural network. The problem that people have right now is that if a network has, let's say, 1,000 hidden layers, which is typical for neural networks right now, people don't understand the parameters that are out there, which could be in the tens of thousands - what each one means. So, when the neural network comes up with an answer, people don't understand where this answer is coming from. They don't know how the computer has come up with this answer. That's what the problem is.
Until the scientists understand what they have created, it will be very hard to take it and enhance it further. The only way that people are enhancing neural networks right now, which are the core of artificial intelligence in general, is by trial and error. They try certain things. If it works, that's fine. If it doesn't, they use another activation function, another set of parameters or neural architecture, and so on. They try different things so that they can get the proper answer that they're expecting, based on a training set and a testing set.
People don't understand what they have created. That's the problem with AI right now. People don't understand it. And the interesting thing is that, although they don't understand it, it's working right. It's giving us the answers that we're expecting. We're getting answers from something that we do not understand. And I challenge any computer scientist out there who's listening to this to tell me how the parameters for a neural network are set and what each parameter means.
You have a neural network with 1,000 nodes. How can you figure that out? They don't know. No one knows. And the researchers are trying to solve that problem, and they cannot solve it. Once that problem is solved, then we'll have a better understanding of how to take these neural networks and drive them toward something that will be beneficial for humanity, as you're suggesting. Until then, we're in the trial-and-error phase. That's where we are right now.
Right now, the whole of AI is trial and error, nothing else. All the research in AI is simply trial and error, and people don't understand that. They think that the researchers out there know what they are doing; they do not. People are just doing trial and error right now. And that is a problem, because we're building something that we don't know.
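Ramy's description can be made concrete with a toy example. The sketch below is my illustration, not anything from the interview: it trains a one-hidden-layer network by stochastic gradient descent and backpropagation to approximate a nonlinear regression target. The learned weights are exactly the opaque "parameters" he is talking about: the fit works, but no individual weight has an interpretable meaning.

```python
# Toy illustration: a neural network as an approximator for a
# nonlinear regression problem, fit by SGD + backpropagation.
import numpy as np

rng = np.random.default_rng(0)

# Nonlinear target to regress: y = sin(3x) on [-1, 1]
X = rng.uniform(-1.0, 1.0, size=(256, 1))
y = np.sin(3.0 * X)

# One hidden layer of 16 tanh units; W1, b1, W2, b2 are the "parameters"
W1 = rng.normal(0.0, 0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0.0, 0.5, size=(16, 1)); b2 = np.zeros(1)

lr = 0.1
for step in range(5000):
    idx = rng.integers(0, len(X), size=32)   # stochastic mini-batch
    x, t = X[idx], y[idx]
    h = np.tanh(x @ W1 + b1)                 # forward pass
    pred = h @ W2 + b2
    err = pred - t                           # gradient seed for squared error
    # Backpropagation: chain rule, layer by layer
    gW2 = h.T @ err / len(x); gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * (1.0 - h ** 2)       # tanh'(z) = 1 - tanh(z)^2
    gW1 = x.T @ dh / len(x); gb1 = dh.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

mse = float(np.mean((np.tanh(X @ W1 + b1) @ W2 + b2 - y) ** 2))
print(f"final MSE: {mse:.4f}")  # should land well below the ~0.52 of predicting zero
```

After training, the error is small, yet none of the individual entries of W1 or W2 "means" anything on its own, which is the interpretability problem Ramy describes, scaled down from billions of parameters to a few dozen.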
KS: Right.
RT: We don't understand how it works. So, can we reach the point where we can actually get to the Utopian state that you're talking about, where it can control the grid and make sure that it only generates enough electricity so that the grid does not overflow and people don't have blackouts? That's a very interesting problem. Is there a solution for it? Yes. I would say that the solution would be more on the quantum computing side rather than artificial intelligence, because it requires lots of processing power and so on.
There are other things that would be more suitable for artificial intelligence, which are more on the services side. And I see that there is huge potential there, but I see huge risks as well. So you're hoping for the Utopian state. I'm hoping for the Utopian state. You're more optimistic than I am, Kirk. I don't trust humanity that much. I don't trust myself that much, as a matter of fact.
KS: I did a podcast the other day and I just told everybody, "Hey, make me the Grand Emperor, and I'll take care of everything for you. It'll all work out. I'm that smart. I'm smarter than everybody else. I'm just great." I understand - it's like a ride. It's like a new ride at an amusement park, and it hasn't gone through testing yet, and you're the first one on, so...
RT: Yes.
KS: You know?
RT: That's scary, man, that's scary.
KS: This is going to come off the rails, but we haven't run it yet. So yes. So when we translate this, let's shrink this down to a five- to 10-year investment horizon, so that people can try to look at these things in a nonlinear way. And I talk about straight lines and exponential curves all the time, because on the front end of any progression, it looks like a straight line, because it's kind of flat. And then you notice that first inflection point, like, oh, it's kind of ramping up. And then, like the AI stocks in the last month, they go straight up.
And straight-up moves usually aren't sustainable without some sort of significant snapback. So, I wonder, for these companies, are they looking at such a big move in technology that they have a hard time applying it in a way that is profitable? All the trial and error ends up costing them a lot of money. And then what are the ramifications with management, right? They get pressure from shareholders. Does that create mistakes? I would be concerned about different levels of mistakes, not so much on the scientific side, because that's really a process.
I was - I thought I was going to be a math and science major until I realized that there are people out there, like Neo in The Matrix, who can pull the numerical bullets out of the air, and I couldn't do that. I had to work too hard to catch up to them. So I'm probably overqualified for what I do, but I couldn't launch a new giant rocket ship that was a mile away from getting into orbit.
So, I just wonder where do you see the hang ups on the corporate side? I think we all think about the government side and the military side for sure. But at the corporate level, where do they play a role in all of this?
RT: Well, the corporations are competing with each other, of course. We know that, and this competition is brutal. And every company is trying to get an edge over the other companies. Now, how will they take that particular thing that they have and turn it into money? That's a totally different issue, and every company is totally different.
The challenge that I'm seeing right now from an investment side is that we are going through a hype state, and people do not understand what AI is. The problem that I'm seeing right now is that people really don't understand the internals of what AI is, but they know that they are using it.
KS: Right.
RT: How can they take what they are using right now, and what will happen in the future? What is the potential of it? What will happen in the future? Now think about the following. How much could computer power increase over the years? I just did some simple calculations and found that over six years, the computing power that we have - I'm talking about hardware, connectivity, disk, and so on - will increase by around a quarter of a million times.
KS: Wow.
RT: So, we're having a quarter-of-a-million-times improvement in computing power altogether, worldwide, over six years. The major bottleneck
KS: Let me jump in and that's probably going to accelerate with the recent quantum computer breakthroughs?
RT: Yes, that does not take the quantum computer into consideration. But we have to remember as well that quantum computers do not work on their own. Quantum computers are not a replacement for traditional computers.
A quantum computer gives us all the possible answers to a problem. And then we need a traditional computer to sift through them and get us the proper answer. So quantum computers don't work on their own, but that's a different problem.
The challenge that people are not realizing right now is that the major problem with AI is the lack of computing power, because AI requires supercomputers for training and testing on data. And remember, it's all based on trial and error. So it has to go through multiple iterations to get something right. And most of these iterations are not done scientifically; they are done by trial and error.
That's the nature of AI right now, until we understand exactly how the parameters of the neural networks work. And I don't expect anyone to know that anytime soon. So until then, the major bottleneck that we have is computing power, assuming that the computing power will increase a quarter of a million times, 250,000 times, over six years. Within six years from now - you mentioned 10 years, I'll just talk about six years.
Continue reading here:
AI Cannot Be Slowed Down With Ramy Taraboulsi And Kirk Spano - Seeking Alpha