2013-03-06

Co.Exist

Are Humans Headed For Extermination By Robots?

Once we build an artificial intelligence smarter than us, will it decide that we’re no longer necessary?

Before you get too comfortable in your chair, consider this: 99% of species that have ever walked, slithered, flown, or swum on this Earth have gone extinct, dead as the dodo, including "five tool-using hominids" that shared some of our advantages (and presumably thought they were unstoppable as well). Statistically speaking, we humans are quite likely to snuff it. A massive rock might fall from outer space, as one did to take out the dinosaurs. Or there might be some kind of unimaginable volcanic eruption that covers everything in lava (as happened to the "large crurotarsans"). We don't know exactly what the cause might be. But we do know that history is full of species emerging and disappearing. It's all there in the fossils.

Nick Bostrom, who heads Oxford University's Future of Humanity Institute, thinks about existential risk for a living: what might happen, and how we might cope. His biggest worry, in fact, isn't asteroids, or volcanoes, or even nuclear weapons or biological killers. It's artificial intelligence, a technology that's a few imaginative leaps from what we have now. The problem, in short, is that we humans aren't very bright. Our brains are good at some functions but inept at others. Some future creation, a sort of super-super-super-super computer as yet undreamt of, could eventually outwit us.

As Ross Andersen puts it in a long article about the Institute’s work:

The average human brain can juggle seven discrete chunks of information simultaneously; geniuses can sometimes manage nine. Either figure is extraordinary relative to the rest of the animal kingdom, but completely arbitrary as a hard cap on the complexity of thought. If we could sift through 90 concepts at once, or recall trillions of bits of data on command, we could access a whole new order of mental landscapes. It doesn’t look like the brain can be made to handle that kind of cognitive workload, but it might be able to build a machine that could.

What makes AI particularly dangerous, Andersen says, is its lack of feeling:

To understand why an AI might be dangerous, you have to avoid anthropomorphising it. When you ask yourself what it might do in a particular situation, you can’t answer by proxy. You can’t picture a super-smart version of yourself floating above the situation. Human cognition is only one species of intelligence, one with built-in impulses like empathy that colour the way we see the world, and limit what we are willing to do to accomplish our goals. But these biochemical impulses aren’t essential components of intelligence. They’re incidental software applications, installed by aeons of evolution and culture. Bostrom told me that it’s best to think of an AI as a primordial force of nature, like a star system or a hurricane—something strong, but indifferent.

If Earth is taken over by AI-driven robots, humans will have to develop "star-hopping technology" to colonize another part of the universe. Indeed, that may be what separates the species that make it from the ones that die. But even that technology, however distant, may not be enough to save us. Researchers at the Institute say there might be "filters," or cosmic blockages, stopping us from living in other galaxies.

Bostrom tells Andersen that he hopes the Curiosity rover won't find evidence of past life on Mars. That would be a bad "omen" for our future:

It would give us reason to suspect that nature is very good at knitting atoms into complex animal life, but very bad at nurturing star-hopping civilisations. It would make it less likely that humans have already slipped through the trap whose jaws keep our skies lifeless.

Head over to Aeon's website for the full article. (And, bear in mind, time may be short.)
