
In case the Lifeboat Foundation can’t prevent an attack by self-replicating nanobots, they’ve also designed this handy space colony.

2011-11-29

Co.Exist

How Stephen Wolfram Is Preparing For The Singularity

Computing and mathematics legend Stephen Wolfram is worried about bigger problems than climate change or overpopulation. He just joined the Lifeboat Foundation, a think tank devoted to ways of protecting humanity from deadly nanoweapons and rogue artificial intelligences.

Mathematics and computing legend Stephen Wolfram (of Wolfram Alpha) is betting on the singularity. This week, the Lifeboat Foundation--a think tank devoted to helping humanity survive existential risks as it “moves towards the Singularity”--announced that Wolfram was joining the organization’s advisory board. Wolfram will help the organization as it works toward solutions to future dangers, like protecting the world from evil robots or self-replicating miniature weapons.

The Singularity is a concept, popularized by Ray Kurzweil, that posits human nature will be fundamentally transformed by technology sometime in the not-too-distant future. In books such as The Singularity Is Near and The Age of Spiritual Machines, Kurzweil argues that man and machine will ultimately become one. While many experts lampoon the idea of the Singularity ever happening, the concept has been massively influential in both science and popular culture.

Wolfram has his own ideas about humanity and the future of technology. In a recent lecture at the 92nd Street Y’s Singularity Summit, Wolfram argued that the universe is best understood as a computer program and that technological advances will fundamentally alter human nature in unimaginable ways.

Lifeboat’s dedication to way-out-there solutions for (arguably unlikely) existential risks has resulted in several unusual projects. These range from conventional ideas that wouldn’t be out of place at government agencies to truly novel solutions for problems few have even imagined.

One program, the nanoshield, focuses on protecting humanity from self-replicating, miniaturized weapons. Futurists believe that military services worldwide may someday deploy “ecophages”--tiny weapons whose only goal is to replicate and attack enemy soldiers and their resources. Specialists at the Lifeboat Foundation are studying the raw materials that could be used to create ecophages; they are also developing contingency plans for everything from detecting ecophages to mounting defenses against nanoweapon attacks. Radiation and sonic defenses are considered the best bets.

Other experts at Lifeboat are focusing on one of Hollywood’s favorite tropes--protecting humanity from asteroid strikes. The asteroidshield program is working on precautionary measures that could someday save the planet. The obvious solution--destroying the asteroid, à la Armageddon--doesn’t work in real life; blowing up an asteroid with missiles or other explosives would still cause a catastrophe thanks to the resulting debris. Instead, scientists working with Lifeboat propose that extensive asteroid detection measures be put into place and complemented with a post-detection program of altering asteroid orbits. As a test, Lifeboat urges that a space agency or military organization somewhere on the planet attempt to “significantly alter the orbit of an asteroid in a controlled manner” by 2015.

Lifeboat’s arguably most interesting project, however, aims to protect us from a threat we don’t even understand yet. AIshield is an ongoing program designed to come up with ways to defend humanity from hostile artificial intelligences. Lifeboat believes that “Artificial General Intelligences” with abilities far exceeding humanity’s will come into existence over the next few decades, and that some of these entities are likely to be malevolent, to cause massive unintended negative consequences for the world, or to be at risk of “going rogue.” The result of Lifeboat’s research is some of the most readable (and movie-ready) scientific literature ever published:

Artificial intelligence could be embedded in robots used as soldiers for ill. Such killerbots could be ordered to be utterly ruthless, and built to cooperate with such orders, thereby competing against those they are ordered to fight. Just as deadly weapons of mass destruction like nuclear bombs and biological weapons are threats to all humanity, robotic soldiers could end up destroying their creators as well. Solutions are hard to come by, and so far have not been found.

In the event that some of these threats come to pass despite Lifeboat’s solutions, there’s also a plan to create small space colonies (pictured above) called Ark 1, which the organization describes as the ultimate gated communities. Instead of colonizing another planet, why not use the space above the Earth? Orbital colonies would allow self-selecting groups of people to live together as the planet below is slowly destroyed.

Other projects Lifeboat’s involved in tackle more, well… everyday existential threats. One ongoing study researches ways to protect the internet from catastrophic attack, while another is focused on creating a science fiction-like global immune system against terrorism that would find and stop terrorists before they commit any violent acts. While the Lifeboat Foundation’s budget is considerably smaller than those of mega-think tanks, it still has a decent amount of funding: The Lifeboat Fund has already received more than $500,000. More importantly for the scientific community, Lifeboat is one of the only outlets where accomplished thinkers can focus on projects that will matter 200 years from now.

The Lifeboat Foundation


18 Comments

  • Guest

    "Wolfram argued that the universe is best understood as a computer program..."

    I’ve seen this one; the answer is 42.

  • JohnHunt

    I think it is unlikely that Lifeboat will be able to anticipate all future existential risks and protect us against them with 100% success.

    Their time and money would be better spent in two related ways.  First is to find ways to slow down the development of technologies with existential risk.  I think that the best way of doing this is to conduct a "contained demonstration" of an existential or near-existential event.  Such an event would generate the political will necessary to increase their budget by orders of magnitude.  Such contained demonstrations would include seeking to be the first to develop (within a level 4 containment facility) self-replicating chemical or nano ecophages and a thoroughly lethal bioweapon.  For AI, they need to have a completely off-the-grid computer lab with a small internet and then intentionally attempt to develop an accelerating seed AI but have an automatic kill switch for when the complexity or activity exceeds a certain level.  It would be far better for all if the first such threats were developed in a secure environment.

    Any delay in the first (and only) existential event will provide more time to establish an off-Earth self-sufficient colony.  But I would have that on the Moon rather than in Earth orbit because of the greater resources on the Moon.

  • nealu

    Hi John,

    Author of the article here. While I agree it's unlikely that Lifeboat (or anyone) can anticipate future existential risks with 100% accuracy, I don't think we need that kind of protection. The fascinating thing for me is that Lifeboat is carrying on research into these topics in a nurturing environment and providing academics, experts, and others with the tools necessary to network and share their opinions. God willing, mankind won't have to worry about killer robots or grey goo anytime soon, but the fact that we do have talented thinkers devoting their spare time to contemplating these issues is, simply, a great way to spur out-of-the-box innovation.

  • Stephen Russell

    Singularity Job Fair with Conference would be awesome.
    Sites for:
    JPL, Pasadena
    KSC FL
    Pt Mugu CA.
    Mojave CA
    Irvine CA
    Las Vegas NV
    Redstone AL area.
    Houston Manned Space, TX.

  • Stephen Russell

    Need a Job Fair to get these projects moving ahead Big Time & PR on Science, SyFy channels alone or PBS Nova.
    Have rep on Science Channel alone.
    Must have.

  • Richard Pauli

    Wonderful lecture.   But I am left with an important question:

    How did you come to decide that global warming is not the greatest existential risk that we face as humans?

    From what I read, there are many, many climatologists, oceanographers, and geologists who think that global warming scenarios indicate real risk. Even the IPCC, the AGU, and the Union of Concerned Scientists say so.

    How is it that you decided not to include that risk?

  • Richard Loosemore

    I predict that Wolfram will shortly be LEAVING the Lifeboat Foundation in a big hurry.... The LBF was exposed by several of its members earlier this year as little more than a front for the quasi-fascist activities of its founder, Eric Klien. Read the article at http://ieet.org/index.php/IEET... if you want to see all the gory details.

  • Smann

    In the end, the robots are machines and could be disabled remotely at the push of a button.
    SMann 

  • @silverton

    "Effective human immortality will undoubtedly be achieved. It'll be the largest discontinuity in human history. I wonder what's on the other side, though. I mean, so much of society and human motivation and so on, is tied up in mortality. So the question is, what will we choose to do?" - Stephen Wolfram

    For my part, one step past any kind of singularity, any even infinitesimally empathetic human will demand basic sufficiency in food, housing, and healthcare for every human being. Like Stephen, I prefer working on the building blocks we can deploy in the here and now. Basic Income is both mathematically achievable and sociologically imperative, today.

    http://j.mp/WhatIsBasicIncome

  • gbacoder


    This organisation is not costing a fortune (unlike NASA) and could prevent really big problems in the future.
    They must make sure their message will be listened to at the time. One of the troubles we have seen with the EU is that no one wanted to speak out on some nations spending way too much. It seemed like boom time, so it might not matter, and getting along seemed more important. That kept anyone from acting to prevent the problem. The psychology of future governments will have to be considered so that the message from these think tanks gets through. A culture against speaking out against technology may emerge, for example. New technology does come with problems, and by thinking ahead we may be able to stop them. That is all the more important as progress suddenly starts accelerating and few stop to think. General concepts and principles about what could happen can be predicted now, and then solutions brainstormed. I agree with a commenter that reading appropriate science fiction would be a good idea. Some of the authors have good ideas about the kinds of things that may happen in the future. Andrew Norris

  • gyrfalcon

    "Really smart people working together to prevent distant-future global catastrophes—what's not to like?" --- Richard Geller

    While I'm fine with groups that try to help others, there is a distinct possibility that an organization like this will function to serve the elitists.  Gated communities in space, population control, etc...

  • Richard Geller

    Really smart people working together to prevent distant-future global catastrophes—what's not to like?
    Really smart people and us too working together to prevent near-term global catastrophes—would be very, very nice. In the meantime, we've got Congress. Yeah, not working for me either.