Welcome to Neural's guide to the glorious future of AI. What wonders will tomorrow's machines be capable of? How do we get from Alexa and Siri to Rosie the Robot and R2-D2? In this speculative science series we'll put our optimist hats on and try to answer those questions and more. Let's start with a big one: The Singularity.
The future realization of robot lifeforms is referred to by a plethora of terms: sentience, artificial general intelligence (AGI), living machines, self-aware robots, and so forth. But the one that seems most fitting is The Singularity.
Rather than debate semantics, we're going to sweep all those little ways of saying "human-level intelligence or better" together and conflate them to mean: a machine capable of at least human-level reasoning, thought, memory, learning, and self-awareness.
Modern AI researchers and developers tend to gravitate toward the term AGI. Normally, we'd agree, because general intelligence is grounded in metrics we can understand: to qualify, an AI would have to be able to do most things a human can.
But there's a razor-thin margin between "as smart as" and "smarter than" when it comes to hypothetical general intelligence, and it seems likely a mind powered by supercomputers, quantum computers, or a vast network of cloud servers would have far greater sentient potential than our mushy organic ones. Thus, we'll err on the side of superintelligence for the purposes of this article.
Before we can even start to figure out what a superintelligent AI would be capable of, however, we need to determine how it's going to emerge. Let's make some quick decisions for the purposes of discussion:
So how will our future metal buddies gain the spark of consciousness? Let's get super scientific here and crank out a listicle with five separate ways AI could gain human-level intelligence and awareness:
In this first scenario, if we assume even a modest year-over-year increase in computation and error-correction capabilities, it seems entirely plausible that machine intelligence could be brute-forced into existence by a quantum computer running strong algorithms within a couple of centuries.
Basically, this means the incredibly potent combination of exponentially increasing power and self-replicating artificial intelligence could cook up a sort of digital, quantum, primordial soup for AI, where we just toss in some parameters and let evolution take its course. We've already entered the era of quantum neural networks, so a quantum AGI doesn't seem all that far-fetched.
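The "primordial soup" idea maps loosely onto classical evolutionary computation, which is a real, well-studied technique today. As a toy illustration (plain classical Python, nothing quantum about it, and every name and parameter below is invented for this sketch), here's a minimal genetic algorithm: toss in random "genomes," then let selection, crossover, and mutation climb a fitness function.

```python
import random

random.seed(0)  # reproducible toy run

def evolve(fitness, genome_len=16, pop_size=30, generations=100, mutation_rate=0.05):
    """Evolve a population of bit-string 'genomes' toward a fitness function."""
    pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]           # selection: keep the fittest half
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genome_len)  # single-point crossover
            child = a[:cut] + b[cut:]
            # mutation: each bit flips with a small probability
            child = [bit ^ (random.random() < mutation_rate) for bit in child]
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

# Toy fitness: count of 1-bits, so "all ones" is the optimum.
best = evolve(fitness=sum)
print(best, sum(best))
```

The leap the scenario imagines is replacing this toy fitness function with "behaves intelligently" and the bit strings with vast self-modifying programs, which is where the honest science ends and the speculation begins.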
What if intelligence doesn't require power? Sure, our fleshy bodies need energy to stay alive, and computers need electricity to run. But perhaps intelligence can exist without explicit representation. In other words: what if intelligence and consciousness can be reduced to purely mathematical concepts that only become apparent when properly executed?
A researcher by the name of Daniel Buehrer seems to think this could be possible. They wrote a fascinating research paper proposing the creation of a new form of calculus that would, effectively, allow an intelligent master algorithm to emerge from its own code.
The master algorithm idea isn't new (the legendary Pedro Domingos literally wrote the book on the concept), but what Buehrer's talking about is a different methodology. And a very cool one at that.
Here's Buehrer's take on how this hypothetical self-perpetuating calculus could unfold into explicit consciousness:
"Allowing machines to modify their own model of the world and themselves may create conscious machines, where the measure of consciousness may be taken to be the number of uses of feedback loops between a class calculus's model of the world and the results of what its robots actually caused to happen in the world."
They even go on to propose that such a consciousness would be capable of having little internal thought wars to determine which actions occurring in the machine's mind's eye should be enacted in the physical world. The whole paper is pretty wild; you can read more here.
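Reduced to a cartoon, the quoted measure (counting uses of feedback loops between a model of the world and what actually happened) could be sketched as a counter on a predict-observe-update cycle. The class, constants, and "world" below are invented purely for illustration and are not from Buehrer's paper:

```python
class FeedbackAgent:
    """Toy agent that keeps a model of a one-number 'world' and counts how
    often it closes the loop between its prediction and the observed outcome."""

    def __init__(self):
        self.model = 0.0          # the agent's belief about the world state
        self.feedback_loops = 0   # tally of model-vs-reality comparisons

    def act_and_observe(self, world_state):
        prediction = self.model
        error = world_state - prediction  # compare the model with what happened
        self.model += 0.5 * error         # update the world model toward reality
        self.feedback_loops += 1          # one more closed feedback loop
        return abs(error)

agent = FeedbackAgent()
for _ in range(20):
    agent.act_and_observe(world_state=1.0)  # a constant world to track

print(agent.feedback_loops, agent.model)
```

In this cartoon the "measure of consciousness" is just `agent.feedback_loops`, which makes plain both the appeal of the proposal (it's countable) and the gap between counting loops and anything we'd recognize as awareness.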
This one's pretty easy to wrap your head around (pun intended). Instead of a bunch of millionaire AI developers with billion-dollar big tech research labs figuring out how to create a new species of intelligent being out of computer code, we just figure out how to create a perfect artificial brain.
Easy, right? The biggest upside here would be the potential for humans and machines to occupy the same spaces. This is clearly a recipe for augmented humans: cyborgs. Perhaps we could become immortal by transferring our own consciousnesses into non-organic brains. But the bigger picture would be the ability to develop robots and AI in the true image of humans.
If we can figure out how to make a functional replica of the human brain, including the entire neural network housed within it, all we'd need to do is keep it running and shovel the right components and algorithms into it.
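For a sense of what "the entire neural network" would mean at the smallest scale, here is a leaky integrate-and-fire neuron, a standard simplified model from computational neuroscience. A brain replica would need on the order of 86 billion of these, densely wired together; the class name and constants here are illustrative, not from any particular brain-simulation project:

```python
class LIFNeuron:
    """Minimal leaky integrate-and-fire neuron: charge leaks away each step,
    input pushes the membrane potential up, and crossing a threshold fires."""

    def __init__(self, threshold=1.0, leak=0.9):
        self.potential = 0.0
        self.threshold = threshold
        self.leak = leak

    def step(self, input_current):
        self.potential = self.potential * self.leak + input_current
        if self.potential >= self.threshold:
            self.potential = 0.0  # reset after firing
            return 1              # spike
        return 0                  # no spike

neuron = LIFNeuron()
spikes = [neuron.step(0.3) for _ in range(20)]
print(sum(spikes), "spikes in 20 steps")
```

With a steady 0.3 input the potential builds for a few steps, fires, resets, and repeats, so the neuron produces a regular spike train. The hard part the paragraph glosses over isn't this unit; it's the wiring diagram connecting billions of them.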
Maybe conscious machines are already here. Or maybe they'll quietly show up a year or a hundred years from now, completely hidden in the background. I'm talking about cloud consciousness: the idea that a self-replicating, learning AI created solely to optimize large systems could one day gain a form of sentience that would, qualitatively, indicate superintelligence but otherwise remain unnoticed by humans.
How could this happen? Imagine if Amazon Web Services or Google Search released a cutting-edge algorithm into their respective systems a few decades from now, and it created its own self-propagating solution system that, through the sheer scope of its control, became self-aware. We'd have a ghost in the machine.
Since this self-organized AI system wouldn't have been designed to interface with humans or translate its interpretations of the world it exists in into something humans can understand, it stands to reason that it could live forever as a superintelligent, self-aware, digital entity without ever alerting us to its presence.
For all we know, there's a living, sentient AI chilling out in the Gmail servers just gathering data on humans (note: there almost certainly isn't, but it's a fun thought exercise).
Don't laugh. Of all the methods by which machines could hypothetically gain true intelligence, alien tech is the most likely to make it happen in our lifetimes.
Here we can make one of two assumptions: aliens will either visit us sometime in the near future (perhaps to congratulate us on achieving quantum-based interstellar communication), or we'll discover some ancient alien technology once we put humans on Mars within the next few decades. These are the basic plots of Star Trek and the Mass Effect video game series, respectively.
Here's hoping that, no matter how The Singularity comes about, it ushers in a new age of prosperity for all intelligent beings. But just in case it doesn't work out so well, we've got something that'll help you prepare for the worst. Check out these articles in Neural's Beginner's Guide to the AI Apocalypse series:
Published November 18, 2020 19:50 UTC
Excerpt from:
Neural's guide to the glorious future of AI: Here's how machines become sentient - The Next Web