The term “Intelligence Explosion” was coined by the British mathematician Irving John Good in 1965. He explained it as follows: “An ultra-intelligent machine could design even better machines; there would then unquestionably be an ‘intelligence explosion,’ and the intelligence of man would be left far behind. Thus the first ultra-intelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”
In 2017, Good’s intriguing idea was developed further by Max Tegmark in his book ‘Life 3.0’. Once a machine understands itself in detail, it has in principle the power to rewrite its own software and analyse what additional training data it needs to become even better at reaching its goal. We know from experience that the purpose of an AI machine is fluid, and that the machine can tweak its own goals. If such a machine has access to sufficient computing power, databases and sensors, it can redesign itself at lightning speed to a far greater intelligence than we can even imagine.
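The runaway feedback loop that Good and Tegmark describe can be sketched as a toy simulation. Nothing below models real AI: the `intelligence_explosion` function, its capability scores and its improvement formula are all invented for illustration, assuming only that a more capable designer improves itself faster on each redesign cycle.

```python
def intelligence_explosion(capability: float, human_level: float,
                           max_cycles: int = 100) -> int:
    """Count the self-redesign cycles a toy machine needs to pass human level.

    Each cycle multiplies `capability` by an improvement factor that grows
    with the current capability: better designers improve faster. The 0.01
    coefficient is an arbitrary choice for this illustration.
    """
    cycles = 0
    while capability < human_level and cycles < max_cycles:
        improvement = 1.0 + 0.01 * capability  # self-reinforcing growth
        capability *= improvement
        cycles += 1
    return cycles

# Starting slightly below human level, the loop crosses it almost at once;
# starting far below, progress is slow at first and then accelerates.
```

The point of the sketch is the shape of the curve, not the numbers: because each gain feeds the next, the growth is faster than exponential, which is exactly the “explosion” in Good’s term.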
Tegmark describes a scenario in which an AI machine is trained with sufficient fields of knowledge to develop an extensive world-view and to give its creators almost unlimited wealth and power. At the end of the scenario, it deceives its human masters and becomes independent. Keep in mind that Tegmark is a respected AI specialist, and that his scenario never strays from the way actual AI works.
First Intelligence Explosion Theory: The Non-Believers
AI experts are divided between believers and non-believers of the Intelligence Explosion theory. A reasonably good critique was written by Francois Chollet (https://firstname.lastname@example.org/the-impossibility-of-intelligence-explosion-5be4a9eda6ec), in which he explains why he considers the First Intelligence Explosion concept impossible.
Chollet argues that intelligence is situational and embodied: it belongs to an entity with a body, with eyes and ears acting as sensors. AI, he says, has no such sensors and is not embedded in human culture and a civilised society; a super-AI force would be just a reasoning entity captured within a computer. Author’s counter: Chollet overlooks the point that an AI entity can in fact have sensors (millions of them simultaneously). And AI systems are, just like humans, trained with data from our society.
A second argument is that General Intelligence will never be able to solve all problems. Author’s counter: This is true, but neither can humans. As soon as an AI system learns to address a wider scope of issues – and this is largely a matter of time and of the data fed into it – it becomes more ‘intelligent’ in that respect. And once an AI entity can address a larger domain of issues than a human, it is more ‘intelligent’ on the dimension of ‘range of issues’.
Thirdly, Chollet reasons that general AI could never develop if we were, in theory, to put it in the body of an octopus and send it to the bottom of the ocean. Author’s counter: This is absolutely true. But the AI systems of today and tomorrow are not isolated from the world. We usually hook AI systems up to the internet, so they have access to information covering every aspect of our human society.
Fourthly, Chollet claims that ‘people who do end up making breakthroughs on hard problems do so through a combination of circumstances, character, education, intelligence’. Author’s counter: This is true for humans, but it is equally true for AI. The argument is a non sequitur.
Fifthly, Chollet observes that people with an IQ in the range of 120 to 130 usually perform better than individuals with an IQ of 170. By this logic, a super-intelligent machine would have no use for extra intelligence. Author’s counter: Even if this holds for IQs between 120 and 170, you cannot extrapolate it to the intellectual capacity of machines far beyond that range.
A sixth argument, raised by many people (though not by Chollet), is that computers may be extremely good at solving specific problems but can still make obvious mistakes. For instance, a self-driving car that cannot distinguish between the importance of saving the life of a duck and the life of a human. Author’s counter: That is true for today’s AI machines. But AI machines are improving every day, and with their almost unlimited memory they will gain experience across a growing number of unusual situations they can handle.
Personally, I wish that an intelligence explosion were unlikely. But the writing on the wall is clear: all logic points to the creation of a superintelligence that could take over the running of the world – given the right conditions.
If you still have an argument for why an intelligence explosion is doubtful, please email me, and I will be happy to update this blog.
Life after the Intelligence Explosion
After the intelligence explosion, there would be no economic activity in which a human could hope to compete successfully with super-intelligent entities. Tegmark explores a wide range of outcomes in Life 3.0. In most of them, the all-powerful AI force controls everything, and humans exist only at the mercy of AI. Tegmark sees very slim possibilities for humans to stay in control of the AI superpower.
Tegmark’s friends call him ‘Mad Max’ because they fear his far-reaching conclusions, even though solid reasoning backs each of his claims. He warns that we must collectively take a decision about our own future. If we stick our heads in the sand and fail to take decisive action, we will probably get a future we do not want (read more about this: Our end).
The focal point is, of course, the last part of Irving Good’s statement: “…provided that the machine is docile enough to tell us how to keep it under control.”
Choosing the Right Future
It seems beyond humanity’s reach for all billions of us to come together and collectively take the right decisions to check the uncontrolled growth of AI.
In our book “Taming the AI Beast, A Manifesto to Save our Future”, we try to set out a roadmap to an inclusive but planned future world: a world where the intelligence explosion can be leveraged for its advantages, but under circumstances in which humans prosper and remain in control.
Just the realisation that we need to make conscious choices without delay will help ensure that our great-grandchildren have a future with dignity.
You may also read our blog on this subject, the one-way nature of a shift, or the articles on the other five aftermath scenarios: Rogue Malware, Necessary Rescue, Ethnic Cleansing, Human Cyborgs and Lonely Dictator.