Feb 18 2017

Shutting the Door on the Future

The concept of a Technological Singularity is becoming current again, with various pundits and techno-billionaires warning us of the potential consequences. Science Fiction writers, in particular, have quite a hard time with the whole thing.

Charles Stross and Jo Walton both famously referred to the Singularity as “the turd in the punchbowl of Science Fiction.” It’s the thing that takes all the credibility and much of the fun out of writing in the genre – at least if you want to set your story anything more than fifty years in the future. In fact, if you go by Vernor Vinge’s original formulation, 2030 is the magic year when trends go vertical. (Yeah, that’s right, Ray Kurzweil and Hans Moravec didn’t invent the idea. Vinge didn’t either; he just provided a concise description and gave it a catchy name. For the origins you have to go all the way back to I.J. Good and Turing.)

Why? Simply because a plausible Technological Singularity means you can’t claim your story is anything other than fantasy with space ships. Hard SF? Hardly. Space Opera? Don’t make me laugh. If the whole point of a Technological Singularity is a rather sudden rise of intelligences orders of magnitude smarter than we are, how can we even imagine what it would be like? We become ants writing books about beings who live in houses and drive cars. How would an ant even envision a house or a car?

Basically, if you accept that a Technological Singularity is even a moderate possibility, you expose nearly the entire Science Fiction genre as something other than an honest extrapolation of the future. Perhaps it’s not a bad thing for us to admit it’s really just entertainment, but it does remove a bit of the luster.

The thing is, while writing seriously extrapolative post-singularity fiction is basically impossible, there isn’t anything keeping people from taking a whack at it anyway. Of those who have, Kathleen Goonan, Greg Benford, Charles Stross, and many others have written some very readable stuff in that vein. Vinge himself has turned in several very credible novels set after, and even slightly before, a singularity event.

And right there is one way to avoid the problem: set your story before the event. Basically everything William Gibson has written is set right before the cusp of a Singularity and Neal Stephenson has followed along behind him with tropes even Gibson couldn’t squeeze in. (I should mention Pat Cadigan and Bruce Sterling here as well.) There is also the more oft-taken route of explaining why a Singularity didn’t happen at all or simply ignoring the whole problem and charging ahead at warp 9, phasers blazing.

Post-singularity stories often fall into two camps: (a) normal (or at least normal enough for an entertaining story) humans wandering lost in a world they can no longer comprehend, a world filled with magic, or (b) some plausible reason a Singularity hasn’t happened for the protagonists even if it did for everyone else, while still giving them a future with space ships and ray guns. (Stross again. Also Iain Banks and Ken MacLeod.) Vinge even invented a magical ‘Slow Zone’ that limited transcendent intelligence in much of the galaxy for some of his stories.

But how plausible are these scenarios really? What would keep a singularity event from happening? Well, there is some question about the plausibility of the entire concept in the first place; maybe there is some natural limit to intelligence or some other factor that keeps even a soft-takeoff Singularity from occurring. But, aside from that, what could we do to avoid one? Do we create laws against transcendent intelligence and bureaucracies to enforce them, like Gibson’s ‘Turing Police?’ That doesn’t seem like a reasonable answer to me as, if nothing else, Nation States will still want their own AIs even if they recognize the dangers. In the end transcendent intelligence becomes the next atomic bomb and look how well we’ve done limiting those. (Actually pretty good, and yet we’ve still got enough nuclear weapons to wipe ourselves off the planet two or three times over.)

So, what else would shut the door on the future but leave humanity intact? Well, there’s those nuclear weapons; a war just big enough to knock us back to the early iron age isn’t impossible. But does that qualify as ‘intact’? Is there no way to retain a technological society without the danger of a Technological Singularity?

There was a time when I wrote a lot about Technological Singularities and Transhumanism myself. In truth I’ve been steeping in the relevant ideas since the mid-1980s. By the late 1990s I stopped going to panel discussions on the subject because I often knew more than the panelists and I didn’t want to be ‘that guy’ who sticks his hand up and says, “Well, actually…” Not long after that I started being put on said panels myself, but that’s another story. (Speaking of stories, maybe I should republish a couple of my older Singularity/Transhumanism stories on my blog, as I’m unlikely to find another home for them that won’t disappear in 5 years, leaving me nothing to link to from this essay. Grump.)

Mind you, a Technological Singularity isn’t an article of faith for me, but it is an interesting concept; one that has colonized quite a bit of my thinking. I’ve certainly wrestled with it as the ‘turd in the punchbowl’ for my own SF stories. I really don’t want to give up and just write space opera that side-steps the issue; I want to write stories that realistically depict humans living in space or otherwise making their way in an actually possible future.

Because of this I’ve put quite a bit of thought into ways to delay a singularity event long enough for humans to at least colonize parts of our own solar system. This goes beyond instituting some kind of a drag intended to allow a ‘soft takeoff’ scenario, as the goal is to push the singularity event out to somewhere near the end of this century, thus allowing me to write plausible Hard SF stories about asteroid miners and cannibalizing Mercury into a ring of solar power stations and research stations on Io and so on.

What I eventually came up with was a nearly complete societal backlash against science and technology. One driven by religion and populist politics and a retreat to ignorance, leading to a distrust of experts and facts. In the beginning this would be a local phenomenon, with governments passing laws against certain technologies or against teaching certain facts; it would become worldwide after a couple of rather nasty wars and terror attacks with high-tech weapons resulted in the establishment of a world government with technology-limiting powers.

In this scenario technology and science do not go away; they are simply distrusted by the public and become tools of the state, twisted into weapons and ways to control a public who are mostly kept ignorant. Private individuals with enough wealth and political pull continue to do whatever the hell they please, while everyone else is manipulated into riots whenever someone without enough wealth or connections tries to do the same.

Things like brain implants do happen under this scenario, but they are for the rich and the connected living in the ‘free cities’, places allowed to innovate under the watchful eye of a world government ready to nuke them into oblivion if it looks like they are going too far. A lucky few escape the planet and its despotic rulers entirely and form new societies around the solar system. Of course eventually the singular tide does roll in, but first there’s some room for the stories I want to write.

Thing is, did you notice anything familiar about what I am describing, at least in its early stages? Yeah. If you’ve paid any attention at all to current events you know what I mean. Does that mean the rest of my future history is actually plausible? I don’t know. I really don’t. But I would be very, very sorry to see it happen. Why? Well, I just described a rather nasty dystopia of the Big Brother variety. It might be a good place to set stories, but it isn’t a future I want to live in.

Does shutting the door on one kind of future tend to shut the door on all futures? Can such a door actually be closed entirely? Would we even want it to? We might be finding out the answer to these questions. We might even find the answer we get is also the answer to Fermi’s big question.