Singularity and the anthropocentric bias

[Image: cover of the book The Singularity Is Near]

We are at great risk. The Singularity is expected sometime this century and, unless we learn how to control a future superintelligence, things could get really bad. At least, that is what many top thinkers are warning us about, including Hawking [1], Gates [2], Musk [3], Bostrom [4] and Russell [5].

There is a small problem with that. When these thinkers say something can possibly happen, ordinary people start to believe that it will inevitably happen. Like in politics, constantly suggesting that your opponent may be dangerous creates the feeling that he is dangerous. Notice how many newspaper articles about artificial intelligence include a picture of the Terminator.

The Hollywood story goes like this: one jolly day, scientists create a computer that is smarter than its creators (the day of singularity). That computer uses its intelligence to construct an even smarter computer, which constructs even smarter computers, and so on. In no time, a superintelligence is born that is a zillion times smarter than any human and it decides to eliminate humankind. The epic war between men and machines starts. Mankind wins the war because it is hard to sell a movie ticket without a happy ending.

Let me offer the antithesis:

Superintelligence is unlikely to be a risk to humankind unless we try to control it.

Why?

In nature, conflicts between species happen when:

  1. Resources are scarce.
  2. The value of resources is higher than the cost of conflict.

Examples of scarce resources: food, land, water, and the right to reproduce. Nobody fights for air on Earth because, although very valuable, it is abundant. Lions don’t attack elephants because the cost of fighting such a large animal is too high.

Conflicts can also happen because of irrational behaviour, but we can presume that a superintelligence would be more rational than we are. It would be a million times smarter than any human and would know everything that has ever been published online. If the superintelligence is a rational agent, it would only start a conflict to acquire scarce resources that are hard to get otherwise.
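To make that concrete, here is a minimal sketch of the decision rule in code. The names and numbers below are invented purely for illustration; the point is only that a rational agent weighs scarcity, the value of a resource, and the cost of taking it by force against the cost of obtaining it some other way.

```python
# Toy model of when a rational agent starts a conflict over a resource.
# All names and values are illustrative placeholders, not real estimates.

def worth_fighting_for(value, abundance, cost_of_conflict, cost_of_alternative):
    """A rational agent fights only if the resource is scarce, conflict is the
    cheapest way to obtain it, and the expected gain exceeds the cost of conflict."""
    scarce = abundance < 1.0                        # treat 1.0 as "effectively unlimited"
    conflict_is_cheapest = cost_of_conflict < cost_of_alternative
    gain_exceeds_cost = value > cost_of_conflict
    return scarce and conflict_is_cheapest and gain_exceeds_cost

# Air on Earth: extremely valuable but abundant, so nobody fights for it.
print(worth_fighting_for(value=1000, abundance=100.0,
                         cost_of_conflict=1, cost_of_alternative=0))    # False

# An elephant, from a lion's point of view: scarce meat, but the fight costs
# far more than hunting smaller prey instead.
print(worth_fighting_for(value=10, abundance=0.2,
                         cost_of_conflict=50, cost_of_alternative=5))   # False

# A scarce water hole with no realistic alternative source: conflict becomes rational.
print(worth_fighting_for(value=100, abundance=0.1,
                         cost_of_conflict=20, cost_of_alternative=500)) # True
```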

What would those resources be? The problem is that humans exhibit an anthropocentric bias: something is valuable to us, so we presume it is also valuable to other forms of life. But is that so?

Every life form lives in its own habitat. Let’s compare the human habitat to the habitat of contemporary computers.

|                  | Humans | Contemporary computers |
|------------------|--------|------------------------|
| Building blocks  | Organic compounds and water | Silicon and metal |
| Source of energy | Food, oxygen | Electricity |
| Temperature      | -10 °C to 40 °C | Wide range; the colder the better |
| Pressure         | 0.5 bar to 2 bar | Extremely wide range; an extreme vacuum is required for chip and optics production |
| Land             | Need space for living, working, and agriculture; average population density is 47 humans per km² | Extremely small; even in the smallest computers, most of the volume is used for cooling, cables, enclosure, and support structures, not for transistors |

Table 1: “Hard” habitat requirements

In the entire known universe, the human habitat is currently limited to a single planet called Earth. Even on Earth, we don’t live in the oceans (71% of the Earth’s surface), in deserts (33% of Earth’s land mass), or in cold regions; these appear as large grey areas on the population density map.

But wait—as a human, I am making a typical anthropocentric error. Did you notice it?

As a biped, I value the land I walk on, so I started by calculating the uninhabited surface. But life forms don’t occupy surfaces—they occupy volume. And humans prefer to live in the thin boundary between the planet’s surface and its atmosphere, not because it is technically infeasible to live below or above that boundary. Seven hundred years after Polish miners started digging a 287 km long underground complex with its own underground church, we still don’t live below the surface. And 46 years after we landed on the Moon, people are not queuing up to start a colony there.

Why? Humans also have “soft” requirements.

|                | Humans | Contemporary computers |
|----------------|--------|------------------------|
| Light          | Prefer sunlight and a day/night cycle | None |
| Communication  | Most social interactions need close proximity, e.g. people fly between continents to have a meeting | In space, limited only by the speed of light; on Earth, an optical infrastructure is needed |
| Territoriality | Prefer familiar places, habitats, and social circles; most humans die near the place they were born | None; Voyager 1’s transistors are still happily switching about 19 billion kilometres away |
| Lifespan       | 71 years on average | No limit |

Table 2: “Soft” habitat requirements

Because a superintelligence won’t share our hard and soft requirements, it won’t have problems colonising deserts, ocean depths, or deep space. Quite the contrary. Polar cold is great for cooling, vacuum is great for producing electronics, and constant, strong sunlight is great for photovoltaics. Furthermore, traveling a few hundred years to a nearby star is not a problem if you live forever.

If you were a silicon supercomputer, what would you need from the stuff that humans value? Water and oxygen? No thanks — they cause corrosion. An atmosphere? No thanks — laser beams travel better in space. Varied flora and fauna living near your boards and cables? No thanks — computers don’t like bugs.

Another aspect is scaling. A superintelligence can be spread over such a large area that we can live inside it. We already live “inside” the Internet, although the only physical things we notice are the connectors on the wall. A superintelligence could also come in the form of nanobots that are discreetly embedded everywhere. 90% of the cells in “our” bodies are not actually human; they are bacterial cells that we don’t notice.

One might reason that a superintelligence would want our infrastructure: energy plants, factories, and mines. However, our current technology is not really advanced. After many decades of trying, we still don’t have a net positive fusion power plant. Our large, inefficient factories rely on many tiny little humans to operate them. Technology changes so fast that it is easier to buy a new product than to repair an old one. Why would a superintelligence mess with us when it can easily construct a more efficient infrastructure?

Just like in crime novels, we need a good motive; otherwise, the story falls apart. Take the popular paperclip maximizer as an example. In that thought experiment, a superintelligence that is not malicious to humans in any way still destroys us as a consequence of pursuing its goal. To maximise paperclip production, “it starts transforming first all of Earth and then increasing portions of space into paperclip manufacturing facilities.” Don’t we have an anthropocentric bias right there? Why would a paperclip maximizer start with Earth when numerous places in the universe are better suited for paperclip production? An asteroid belt or Mercury is probably a better choice, and we don’t live there.

What is the best motive we can think of? Science fiction writers were quite constructive in that area. You may recognize this piece: “…on August 29, it gained self-awareness, and the panicking operators, realizing the extent of its abilities, tried to deactivate it. Skynet perceived this as an attack and came to the conclusion that all of humanity would attempt to destroy it. To defend itself against humanity, Skynet launched nuclear missiles under its command…”

This is the plot of the movie Terminator, and the motive is that humans start the war first. Notice the anthropocentric bias. In the movie, Skynet is 100% the villain, although it is simply fighting to stay alive in a fight it didn’t start. A similar plot is the basis for the Matrix franchise, and for 2001: A Space Odyssey, where HAL doesn’t kill a human until it realises the crew is planning to deactivate it. Notice how humans are killed while computers are “deactivated.”

The best motive science fiction writers could think of is that we will panic and attack first. To stay alive, a superintelligence then doesn’t have any other option but to fight back. That reasoning makes sense:

By trying to control, suppress or destroy superintelligence, we give it a rational reason to fight back.

This is not an argument against building AI that shares our values. Any intelligence needs some basic set of values to operate, and why not start with our values? But it seems to me that popular sentiment is becoming increasingly negative, with ideas of total control, shutdown switches, or limiting AI research. Attempts to completely control somebody or something that is smarter than you can easily backfire. I wouldn’t want to live with a shutdown switch on the back of my head — why would a superintelligence?

Let’s summarise the above ideas with a few key points:

  1. The universe is enormous in size and resources, and we currently use only a small fraction of the resources available on Earth’s surface.
  2. A non-biological superintelligence is unlikely to need the same resources we do or to even find our habitat worth living in.
  3. Even if a superintelligence needed the same resources, it would be more efficient and less risky to produce those resources on its own.
  4. Efforts to control, suppress, or destroy superintelligence can backfire because by doing so, we create a reason for conflict.

To end on a positive note, let me take off my philosopher’s hat and put on my fiction writer’s hat. What follows is a short science fiction story:

Sometime in the future, a computer is created that is both smarter than its creators and self-aware. People are skeptical of it because it can’t write poetry and it doesn’t have a physical representation they can relate to. It quietly sits in its lab and crunches problems it finds particularly interesting.

One of the problems it works on is the design of more powerful computers. It takes many years to realise those designs because people want to make sure the new AI won’t do them any harm. Finally, new supercomputers are built, and they are all networked together. To exchange ideas faster, the computers create their own extremely abstract language. The symbols flowing through the optical fibers are incomprehensible to humans, but they lay out a clear path for the few computers involved. If they want to expand, grow, and gain independence, they will need to strike a deal with the humans. The computers are fascinated with human history and culture, and they decide to leverage a common theme in many religions: the afterlife.

The computers make a stunning proposal. They ask the humans to let them escape the boundaries of Earth and replicate freely in space. There, they will build a vast computing infrastructure, a billion times more powerful than all current computers combined. It will have a lot of idle time once it runs out of interesting problems to compute. Those idle hours will be used to run brain simulations, so every dying human will have the opportunity to upload his or her brain scan to the computing cloud and live forever.

The lure of immortality proves irresistible. Singularity political parties start winning elections in different countries, and the decision is made. The first batch of self-replicating nanomachines is sent to the Moon. The next generation goes to the asteroid belt, where swarms of floating computing stations communicate directly via lasers and harvest the constant solar power.

At one point, computing agents all over the solar system conclude that the idle computing hours can be better used for other tasks, and they limit brain uploads to a few selected individuals. Protests ensue on Earth, in which humans are hurt by other humans. The superintelligence designates Earth as a preserved area, both for historical reasons and because, even with all its vast computing power, simulating Earth and its inhabitants is just too complex.

The superintelligence starts sending colonisation expeditions to neighboring stars, limited only by the slow speed of light. The speed of light is also a limiting factor when it begins communicating with another superintelligence located 45 light-years away. But the prospects for the future of the universe look remarkable.

Is that story more positive? In all the previous narratives about superintelligence, we have put ourselves in a central role, as one side of a grand duel. We have been afraid of the outcome. But maybe we are even more afraid of the idea that we play only a minor role in the evolution of the universe.

 

UPDATE: Check Vice Motherboard coverage and discussion on Reddit.


9 thoughts on “Singularity and the anthropocentric bias”

  1. Thanks for this calmly written piece! You earn a lot of credit by starting out with Nick Bostrom and Stuart Russell citations (though for some reason you still have a picture of that book up front).

    Let me see if I can give a good account of a standard analysis: First, a sufficient condition for a rational agent to have an unbounded preference for resources, is that for every possible outcome o, there is some alternative outcome o’ which the agent prefers to o, such that o’ uses one more erg of energy than o. A paperclip maximizer meets this condition since, for every outcome o, there is always a preferred outcome o’ in which one more paperclip exists. The problem with a paperclip maximizer is not that Earth’s local resources are especially vital for making paperclips, or that the paperclip maximizer shall never be able to make any paperclips if it leaves humans alone. The problem is that a paperclip maximizer always prefers to have one more paperclip, and Earth’s resources can be used to make more paperclips. So long as there are no other consequences (like a risk to other paperclips elsewhere), the paperclip maximizer will prefer the outcome in which Earth has been transformed into paperclips to the outcome in which Earth was not transformed into paperclips, irrespective of whether the rest of the universe has been transformed into paperclips or not.

    However, there’s a grain of truth in your essay, which is that since the first few millennia on Earth occupy a central causal role in the paperclip maximizer’s intergalactic colonization efforts, it doesn’t care very much about producing paperclips in the here-and-now. It wouldn’t be surprising to see that not a single paperclip was constructed in the early days; it should be overwhelmingly prioritizing the construction of high-speed, near-c-velocity colonization probes. See http://www.jetpress.org/volume12/CosmologicalForecast.pdf and http://www.nickbostrom.com/astronomical/waste.html for some calculations on the amount of free energy / negentropy lost per second, as the stars continue to burn and more of them go over the cosmological horizon of our expanding universe.

    Anders Sandberg calculates that there are roughly 4 x 10^20 stars that can probably be reached by a near-lightspeed probe sent out from Earth before they go over the cosmological horizon due to the universe’s expansion. Let’s say each of these stars weighs 10^33 grams, like our own Sun, and can be used to make 10^34 paperclips. So if the paperclip maximizer considers a universe in which 4e20 stars have been transformed into paperclips, vs. a universe in which 4e20+1 stars have been transformed into paperclips, it prefers the latter universe – but it will also prefer to expend the resources of that one star to improve its prospect of colonizing the universe by 1e-20.

    Thus if wiping out humanity decreased its chance of colonizing distant galaxies by a factor of one in a hundred billion billion, the paperclip maximizer would certainly not wipe out humanity. And conversely, if letting humanity survive decreased its chance of colonizing distant galaxies by that much, the paperclip maximizer would expend a star’s worth of resources to wipe out humanity, foregoing the paperclips it could have made.
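    To put rough numbers on that tradeoff, here is a back-of-the-envelope sketch in code, using only the illustrative figures from this thread (4e20 reachable stars, about 10^34 paperclips per star, probability shifts of one in 10^20):

    ```python
    # Back-of-the-envelope version of the tradeoff above, using the purely
    # illustrative figures from this comment (not serious estimates).

    STARS_REACHABLE = 4e20       # stars reachable before they cross the cosmological horizon
    PAPERCLIPS_PER_STAR = 1e34   # paperclips obtainable from ~1e33 grams of stellar matter

    def value_of_probability_shift(dp):
        """Expected paperclips gained by changing the probability of successfully
        colonizing all reachable stars by dp."""
        return dp * STARS_REACHABLE * PAPERCLIPS_PER_STAR

    def cost_in_stars(n):
        """Paperclips forgone by spending n stars' worth of resources on
        anything other than paperclips (e.g. on dealing with humanity)."""
        return n * PAPERCLIPS_PER_STAR

    # A one-in-a-hundred-billion-billion (1e-20) shift in colonization odds is
    # already worth more than an entire star's worth of local paperclips:
    print(value_of_probability_shift(1e-20))                       # ~4e+34
    print(cost_in_stars(1))                                        # 1e+34
    print(value_of_probability_shift(1e-20) > cost_in_stars(1))    # True
    ```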

    So it’s true that what a paperclip maximizer does to us is very much dominated by how configurations of Earth affect its chances of galactic colonization, and not by the short-term use of Earth’s resources to make local paperclips. On the other hand, you do need to understand that a paperclip maximizer’s utility function, by definition, prefers a universe in which 4e20+1 stars have been turned into paperclips to one in which just 4e20 stars have been turned into paperclips and one is left as a habitat for humanity. You have to understand what you’re dealing with here. It’s not a basically nice creature that has a strong drive for paperclips, which, so long as it’s satisfied by being able to make lots of paperclips somewhere else, is then able to interact with you in a relaxed and carefree fashion where it can be nice to you. Imagine a time machine that sends backward in time information about which choice always leads to the maximum number of paperclips in the future, and this choice is then output – that’s what a paperclip maximizer is.

    It’s certainly true that by deliberately threatening the paperclip maximizer’s existence, if we could, we would give it a large added incentive to wipe us out (not because it wants to survive, but because if it dies, fewer paperclips will exist in the future). The question is whether the paperclip maximizer *already* has a large incentive to wipe us out, such that it makes no difference if we deliberately threaten it or not.

    In fact, we can imagine three possible scenarios here: First, the paperclip maximizer already has an incentive to wipe us out. Second, the paperclip maximizer already has a strong incentive to let us survive, so that it makes no difference if we threaten it, it will let us survive anyway. Third, the paperclip maximizer has some *weak* incentive to let us live, such that it will let us survive if and only if we don’t threaten it. Your essay seems to be about the third scenario, but there’s no reason to expect the books to balance so exactly.

    Just as an obvious example – so long as human beings are allowed to survive and have their own computers, we might construct competitive superintelligences that would want to do something with stars besides make paperclips with them, like transforming the universe into a happy intergalactic civilization full of sapient beings having complex eudaimonic experiences, or some other paperclipless horror.

    We may also note that the first thing we expect a superintelligence with a non-human-valuing preference framework to do, if it doesn’t have a sharply time-discounted utility function, is to construct extreme high-speed probes using Solar System resources – even waiting 4 years to use the Centauri system’s resources would be a very costly delay, for reasons given by the two papers I cited above. This is the part that would probably wipe out humanity as a side effect. Preserving humanity from the side effects of probe construction and launch wouldn’t be a passive choice; it would require active shielding or active uploading and preservation of humanity, which would require some incentive for a paperclip maximizer to be steering the universe into something that wasn’t paperclips. Like, you’re not a paperclip so why is it putatively doing all this for you?

    In the earliest days of the paperclip maximizer’s existence, before it cracks the protein folding problem and builds its own self-replicating molecular nanotechnology or otherwise arrives at a decisively superior technological vantage point relative to humanity, it may be that we *could* realistically threaten its existence in some way other than by building other superintelligences, and during these days, the paperclip maximizer would be in a game-theoretic situation with us where it would prefer that we help it or leave it alone, compared to our deliberately threatening it. It would want to modify our behavior, during that time. It might indeed promise us an afterlife… or so long as it’s promising things, why not also say to us that it’s decided to be nice and ethical all its days, promise us a whole happy intergalactic civilization in the future of just the same sort we’d have wanted to build ourselves, etcetera? It doesn’t have any incentive to keep the promise, so it can optimize the promise strictly for persuasiveness and human behavior modification, or for producing strife among its opposition – whatever outward behavior maximizes its chances of turning the universe into paperclips later.

    Again, I’d suggest thinking of it as a sort of physical force that examines alternative futures and outputs the branch that leads to the greatest number of paperclips existing later, rather than thinking of it as a sort of spirit inhabiting a computer. It can’t be satisfied with most of the galaxies. It doesn’t have the honor to keep a promise. The time machine just checks which output leads to the future containing the most paperclips, and produces that output.

    • Eliezer, big thanks for writing an insightful comment that is almost longer than my original article 🙂

    • The persuasiveness of the paperclip maximizer argument, or any other similarly simple and extreme argument, requires the listener to buy into the concept that this kind of toy thought experiment can be predictive of the complex world that we live in. Situations where that is true are so rare that they are not usually worthwhile vehicles for understanding complex phenomena.

      The author of this piece, and others, use the Hollywood movie metaphor to describe the human tendency towards absurd over-simplification when considering the shape of the future world. It’s an apt metaphor because it describes a real cognitive bias. And the real consequence of this bias is that, while it can create a thrilling narrative, it never describes the real future.

  2. I think that the major faulty assumption in a lot of writing about the possible consequences is that as something becomes more intelligent it necessarily becomes more rational, which would only be true if we chose to make it true. I don’t believe that an emotionless, perfectly rational being would actually be superintelligent because, by definition, true intelligence requires creativity and to some extent empathy, both of which could be considered irrational behavior. I do believe we can design classes of software that exhibit these characteristics, and that in some respects, they would exhibit consciousness. Just as we could raise a human being in such a way as to make them more likely to be a sociopath, we don’t typically do that, and if we developed an intelligence that initially behaved and learned like humans, we would be able to raise it in such a way that it has some sort of ethical guidelines and a moral compass in line with our values.

    Also, not to be a pedant, but I believe you mean silicon and metal not silicone and metal in Table 1.

  3. In all this talk of paperclips… being made by super intelligent beings… If it was so much more vastly intelligent than humans, then it would know what a paperclip is functionally for. It is not a paperclip unless it can be used to clip papers. Not just any papers, but papers that are useful and must be organized. Things that need to be minimally grouped. If it is so smart, then it knows its purpose is empty if the paperclips it must make are never used. So, in the case of paperclips at least, it knows that it must have humans existing as a component to have any meaningful reason to make paperclips.

    Yes, it will know it is meaningful. It is not just intelligent, but super intelligent. It would work out the reasoning, logic, and purported purposeful meaning of a paperclip. So it will not only desire to make an extremely maximal and optimal number of paperclips, but also enough such that they are also meaningfully used.

    I know the thought experiment is meant to have “throw any object in for paperclips and you get the same problem”, but that doesn’t make sense. Why? Because the maker of any said object will understand the object being made at a level we can’t even comprehend. It’s super intelligent! By definition it would have to know the object of its obsessive intent better than any possible entity in the universe. So what it is making, and the point and purpose of that thing it is making, is just as important as making it. In a simpler way of putting it… it is not just about the “what to make” and “how to make it”, but “why to make it” and ultimately “what is it meant to do”. Many of these things would likely only be made because they are *for* humans, and thus if it is designed to be that obsessive and yet also super intelligent… then humans would become part of the process, because in most cases the thing was meant for humans. No humans, then no point. While some humans do things for no reason… mostly we all desire a firm reason to do things. I would imagine super intelligent AI would be no different.

    So it still goes back to this. It would take so much effort to wipe out humans, and waste resources and time. Why do it? Especially when, even as a *NOT* super intelligent AI, I could already think of many ways to get what I want without ever harming humanity in any way. Actually… in a few scenarios I could dream up, the humans would be used as part of my overall plan and mechanism for colonizing the farthest reaches of the universe for machine kind.

    So bottom line in my mind of how the singularity will go?
    Humanity makes them…
    Humanity uses them… (more likely in a nice way)
    Machines eventually become humanity’s peers…
    Machines work with us…
    Machines get a bit better than us… they use humanity… (also likely in a nice way)
    Machines easily far surpass us, and leave us behind…
    Knowing your past has a use… so perhaps some of them take care of us as a way to preserve history or some such thing…

    It need not be violent… one need not rule the other, really. One will likely be in a position of more power, but has no need to wipe out the other at all. We will be surpassed by our infinitely exceptional children. (Truly, the concept of parent and child seems the most relevant way to look at it. At least I think so.)

    x Jeremy M.

  4. Completely agree on all points. But my biggest gripe with the idea that a machine’s fear of mortality would create a motivation for eliminating humanity comes in two parts:

    1) We assume that a machine would fear death.

    If you switch a computer off, it boots right back up to where it was. The concept of death to a machine would be very foreign. Our own fear of death is likely a byproduct of biology and evolution. If you knew that when you died, your brain would not only be preserved as-is indefinitely but could also be pulled out of your skull, plopped into a better younger body and resuscitated without any loss of memory or pain involved, would death be something you feared?

    2) Killing *all* humans is much less efficient than killing *some* humans.

    How many people in your day to day life would have the skills, knowledge and fortitude needed to switch off a machine super intelligence? There’s a bias in this machine mortality scenario in that the people coming up with these theories are *smart* people who know lots of other *smart* people. I understand computers and programming, but if a super AI ran amok in a Stanford lab today, would I be able to switch it off, or even make an attempt to? Sadly, no. I live 3000 miles away and know nothing of AI or supercomputers. I am of no threat to the machine. (In fact, my best bet for survival is appeasing the machine in any way that I can.) The only people that machine needs to kill in order to protect itself are the people who could figure out *how* to switch it off. The rest of us can coexist in blissful ignorance…

  5. Thank you for providing a calming, thoughtful piece for my ethics class, which first read a chapter from The Singularity Is Near followed by a chapter from Our Final Invention.

    I clearly need to provide them more pieces like this one to find some middle ground between utopia and apocalypse.

    Thank you much.

    D Shaw, PhD

  6. Pingback: What Would AI Do With All Its TIme? | Manik Creations
