When a company develops an artificially intelligent machine with an IQ of 500, what happens to the workers it puts on the street? When the first scientists edit human genes with CRISPR, who's going to pick up the pieces if the resulting being is none too friendly? When a few people control the machines while everyone else is subservient to them, what does that mean for democracy?
From AI to genetics to the internet of things, technology is moving faster than ever, creating all sorts of wonderful possibilities. Self-learning networks could cure cancer. Robots could do the dirty work humans don't want to do. Within years, we may have boundless renewable energy, and batteries that go a week without charging. But what of the unintended consequences? As a society, we tend to lionize technology as the solution to everything, while barely considering the ethical, political, and social implications.
In his new book, Technology vs. Humanity, the German-born futurist Gerd Leonhard tries to even up the discourse. His is "a passionate call to stop and think before we all get swept up in the magic vortex of technology." And this from someone who consults for Google and extols the magical possibilities of technology every day. Leonhard is no techno-pessimist, and he isn't doom-mongering for its own sake. He simply wants us to think through the world that companies like Google are creating.
"We should take great care to not just leave these decisions to 'free markets,' to venture capitalists, corporate technologists, or the world's most powerful military organizations," he writes. "The future of humanity should not be about some generic, industrial-age paradigm of profit and growth at all costs, or some outmoded technological imperative that may have served us well in the 1980s."
Leonhard compares the "externalities" created by oil and gas companies—the pollution that rarely features in corporate accounts—with the potential externalities created by AI and genetic engineering. "The consequences should also lie with the companies that invent them, not just the government or the users," he says in an interview.
The futurist outlines a philosophy he calls "exponential humanism"—the human equivalent of exponential technology. As a species we're not developing the necessary skills and ethical frameworks to deal with technology that's moving faster than we are, he says. We may be able to merge biology and technology, augment our minds and bodies, become superhuman, end disease, and even prolong life. But we're yet to ask ourselves whether, for example, extending life is actually a good thing (as a society—there will always be individuals who for some reason want to live to 150). And, more to the point, will these incredible advances be available to everyone, or just a few people? To Leonhard, our current technological determinism—the view that technology itself is the purpose—is as dangerous as Luddism was 200-odd years ago. Without moral debate, we're trusting in technology for its own sake, not because it actually improves our lives.
But how can we take control of the future we want? He gives a few examples. Facebook has swallowed the media landscape, yet continues to insist it doesn't control it. What if it behaved like a media company, rather than hiding behind an algorithm that absolves it of responsibility for the news? Companies that automate workers out of existence might have to pay an "automation tax," so we can fund the transition to a less work-centric future. We might consider a guaranteed minimum income for all—a basic income—to help out technology's victims.
Leonhard says AI is going to completely change how we do things even in the next five years, starting with voice control and digital assistants that take over search and apps. Within 15 years, he expects to see a range of goods—from energy to music—become ever cheaper, as technology makes it possible to replicate things virtually for nothing. In time, he foresees "the collapse of the consumption-based capitalistic economy," because fewer people will have jobs, and more stuff will be available for free, or near-free.
"If you look at automation, digitization, virtualization, when they all come together, the market economy is going to fall. There's no way we can work in the same way we do now. The problem is the alternative ideas may not be capitalist or market-driven," he says (as in the idea of a basic income, which many consider anathema to America's capitalist traditions).
Again, Leonhard isn't a pessimist. His book catalogs plenty of incredible inventions and discoveries, and he predicts more. But he thinks the assumption that technology necessarily produces profits and social benefits is facile, and that we spend too little time contemplating the downsides. It's time to start considering them as well.