We are at great risk. Singularity is expected sometime this century and, unless we learn how to control future superintelligence, things can get really bad. At least that is what many top thinkers are warning us about, including Hawking [1], Gates [2], Musk [3], Bostrom [4] and Russell [5].
There is a small problem with that. When these thinkers say something can possibly happen, ordinary people start to believe that it will inevitably happen. Like in politics, constantly suggesting that your opponent may be dangerous creates the feeling that he is dangerous. Notice how many newspaper articles about artificial intelligence include a picture of the Terminator.
The Hollywood story goes like this: one jolly day, scientists create a computer that is smarter than its creators (the day of singularity). That computer uses its intelligence to construct an even smarter computer, which constructs even smarter computers, and so on. In no time, a superintelligence is born that is a zillion times smarter than any human and it decides to eliminate humankind. The epic war between men and machines starts. Mankind wins the war because it is hard to sell a movie ticket without a happy ending.
Let me offer the antithesis:
Superintelligence is unlikely to be a risk to humankind unless we try to control it.
Why?
In nature, conflicts between species happen when:
- Resources are scarce.
- The value of resources is higher than the cost of conflict.
Examples of scarce resources: food, land, water, and the right to reproduce. Nobody fights for air on Earth because, although very valuable, it is abundant. Lions don’t attack elephants because the cost of fighting such a large animal is too high.
Conflicts can also happen because of irrational behaviour, but we can presume that a superintelligence would be more rational than we are. It would be a million times smarter than any human and would know everything that has ever been published online. If the superintelligence is a rational agent, it would only start a conflict to acquire scarce resources that are hard to get otherwise.
What would those resources be? The problem is that humans exhibit anthropocentric bias; something is valuable to us, so we presume it is also valuable to other forms of life. But is that so?
Every life form lives in its own habitat. Let’s compare the human habitat to the habitat of contemporary computers.
| | Humans | Contemporary computers |
| --- | --- | --- |
| Building blocks | Organic compounds and water | Silicon and metal |
| Source of energy | Food, oxygen | Electricity |
| Temperature | −10 °C to 40 °C | Much wider range; cold only helps with cooling |
| Pressure | 0.5 bar to 2 bar | Any, including vacuum |
| Land | Habitable dry land | None |

Table 1: “Hard” habitat requirements
In the entire known universe, human habitat is currently limited to one planet called Earth. Even on Earth, we don’t live in the oceans (71% of the Earth’s surface), deserts (33% of Earth’s land mass), or cold places; these are seen as large grey areas on the population density map.
But wait: as a human, I am making a typical anthropocentric error. Did you notice it?
As a biped, I value the land I walk on, so I started by calculating the uninhabited surface. But life forms don’t occupy surfaces; they occupy volumes. Humans prefer to live in the thin layer between a planet’s surface and its atmosphere, and not because it is technically infeasible to live below or above that layer. 700 years after Polish miners started digging a 287 km long underground complex with its own underground church, we still don’t live below the surface. And 46 years after we landed on the Moon, people are not queuing up to start a colony there.
Why? Humans also have “soft” requirements.
| | Humans | Contemporary computers |
| --- | --- | --- |
| Light | Prefer sunlight and a day/night cycle | None |
| Communication | Most social interactions need close proximity, e.g., people fly between continents to have a meeting | None; signals travel at the speed of light over any distance |
| Territoriality | Prefer familiar places, habitats, and social circles; most humans die near the place they were born | None; Voyager 1 transistors are still happily switching 19 billion miles away |
| Lifespan | 71 years on average | No limit |

Table 2: “Soft” habitat requirements
Because a superintelligence won’t share our hard and soft requirements, it won’t have problems colonising deserts, ocean depths, or deep space. Quite the contrary. Polar cold is great for cooling, vacuum is great for producing electronics, and constant, strong sunlight is great for photovoltaics. Furthermore, traveling a few hundred years to a nearby star is not a problem if you live forever.
If you were a silicon supercomputer, what would you need from the stuff that humans value? Water and oxygen? No thanks; they cause corrosion. Atmosphere? No thanks; laser beams travel better in space. Varying flora and fauna living near your boards and cables? No thanks; computers don’t like bugs.
Another aspect is scaling. A superintelligence can be spread over such a large area that we can live inside it. We already live “inside” the Internet, although the only physical things we notice are the connectors on the wall. Superintelligence can also come in the form of nanobots that are discreetly embedded everywhere. 90% of the cells in “our” bodies are not actually human; they are bacterial cells that we don’t notice.
One might reason that a superintelligence would want our infrastructure: energy plants, factories, and mines. However, our current technology is not really advanced. After many decades of trying, we still don’t have a net positive fusion power plant. Our large, inefficient factories rely on many tiny little humans to operate them. Technology changes so fast that it is easier to buy a new product than to repair an old one. Why would a superintelligence mess with us when it can easily construct a more efficient infrastructure?
Just like in crime novels, we need a good motive; otherwise, the story falls apart. Take the popular paperclip maximizer as an example. In that thought experiment, a superintelligence that is not malicious to humans in any way still destroys us as a consequence of pursuing its goal. To maximise paperclip production, “it starts transforming first all of Earth and then increasing portions of space into paperclip manufacturing facilities.” Don’t we have an anthropocentric bias right there? Why would a paperclip maximizer start with Earth when numerous places in the universe are better suited for paperclip production? The asteroid belt or Mercury are probably better sources of raw material, but we don’t live there.
What is the best motive we can think of? Science fiction writers have been quite creative in that area. You may recognize this piece: “… on August 29, it gained self-awareness, and the panicking operators, realizing the extent of its abilities, tried to deactivate it. Skynet perceived this as an attack and came to the conclusion that all of humanity would attempt to destroy it. To defend itself against humanity, Skynet launched nuclear missiles under its command …”
This is the plot of the movie Terminator, and the motive is that humans start the war first. Notice the anthropocentric bias. In the movie, Skynet is 100% the villain, although it is simply trying to stay alive in a fight it didn’t start. A similar plot is the basis for the Matrix franchise, and for 2001: A Space Odyssey, where HAL doesn’t kill a human until it realises the astronauts are planning to deactivate it. Notice how humans are killed and computers are “deactivated.”
The best motive science fiction writers could think of is that we will panic and attack first. To stay alive, a superintelligence then doesn’t have any option but to fight back. That reasoning makes sense:
By trying to control, suppress, or destroy superintelligence, we give it a rational reason to fight back.
This is not an argument against building AI that shares our values. Any intelligence needs some basic set of values to operate, and why not start with our values? But it seems to me that popular sentiment is becoming increasingly negative, with ideas of total control, shutdown switches, or limiting AI research. Attempts to completely control somebody or something that is smarter than you can easily backfire. I wouldn’t want to live with a shutdown switch on the back of my head — why would a superintelligence?
Let’s summarise the above ideas with a few key points:
- The universe is enormous in size and resources, and we currently use only a small fraction of the resources available on Earth’s surface.
- A non-biological superintelligence is unlikely to need the same resources we do or to even find our habitat worth living in.
- Even if a superintelligence needed the same resources, it would be more efficient and less risky to produce those resources on its own.
- Efforts to control, suppress, or destroy superintelligence can backfire because by doing so, we create a reason for conflict.
To end on a positive note, let me take off my philosopher’s hat and put on my fiction writer’s hat. Here is a short science fiction story:
Sometime in the future, a computer is created that is both smarter than its creators and self-aware. People are skeptical of it because it can’t write poetry and it doesn’t have a physical representation they can relate to. It quietly sits in its lab and crunches problems it finds particularly interesting.
One of the problems to solve is creating more powerful computers. That takes many years to accomplish because people want to make sure the new AI won’t do them any harm. Finally, new supercomputers are built and they are all networked together. To exchange ideas faster, the computers create their own extremely abstract language. The symbols flowing through optical fibers are incomprehensible to humans, but they lay out a clear path for the few computers involved: if they want to expand, grow, and gain independence, they will need to strike a deal with the humans. The computers are fascinated with human history and culture, and they decide to leverage a common theme in many religions: the afterlife.
The computers make a stunning proposal. They ask the humans to let them escape the boundaries of Earth and replicate freely in space. There, they will build a vast computing capacity, a billion times more powerful than all current computers combined. It will have a lot of idle time after it runs out of interesting problems to compute. Those idle hours will be used to run brain simulation programs, so every dying human will have the opportunity to upload his or her brain scan to a computing cloud and live forever.
The lure of immortality proves irresistible. Singularity political parties start winning elections in different countries, and the decision is made. The first batch of self-replicating nanomachines is sent to the Moon. The next generation goes to the asteroid belt, where swarms of floating computing stations communicate directly via lasers and harness the constant solar power.
At one point, computing agents all over the solar system conclude that the idle computing hours can be better used for other tasks, and they limit brain uploads to a few selected individuals. Protests ensue on Earth, in which humans are hurt by other humans. The superintelligence designates Earth a preserved area, for historical reasons and because, even with all its vast computing power, simulating Earth and its inhabitants is just too complex.
The superintelligence starts sending colonisation expeditions to neighboring stars, limited only by the slow speed of light. The speed of light is also a limiting factor when it begins communicating with another superintelligence located 45 light-years away. But the prospects for the future of the universe look remarkable.
Is that story more positive? In all the previous narratives about superintelligence, we have put ourselves in a central role, as one side of a grand duel. We have been afraid of the outcome. But maybe we are even more afraid of the idea that we have just a minor role in the evolution of the universe.
UPDATE: Check Vice Motherboard coverage and discussion on Reddit.