TII: Our Final Invention?

If you enjoy this episode of Matt Miller's new program, This... Is Interesting, please consider subscribing. You can find new episodes on demand at http://kcrw.com/thisisinteresting and via iTunes, the KCRW Radio App or your favorite podcasting platform. Beginning in January 2014, This... Is Interesting will no longer broadcast on the Left, Right and Center feed.

Lately, I’ve become obsessed with an issue so daunting it makes even the biggest “normal” questions of public life seem tiny. I’m talking about the risks posed by “runaway” artificial intelligence. What happens when we share the planet with self-aware, self-improving machines that evolve beyond our ability to control or understand them? Are we creating machines that are destined to destroy us?

I know when I put it this way it sounds like science fiction - or the ravings of a crank. So let me explain how I came to put this onto your screen.

A few years ago I read chunks of Ray Kurzweil’s book, The Singularity Is Near. Kurzweil argues that what sets our age apart from all previous ones is the accelerating pace of technological advance – an acceleration made possible by the digitization of everything. Because of this unprecedented pace of change, he says, we’re just a few decades away from basically meshing with computers and transcending human biology (think Google, only much better, inside your head). This development will supercharge notions of “intelligence,” Kurzweil says, and even make it possible to upload digitized versions of our brains into the cloud so that some form of “us” lives forever.

Mind-blowing and unsettling stuff, to say the least. If Kurzweil is right, what should I tell my daughter about how to live, or even about what it means to be human?

Kurzweil has since become enshrined as America’s uber-optimist on these trends. He and other evangelists say accelerating technology will soon equip us to solve our greatest energy, education, health and climate challenges en route to extending the human lifespan indefinitely.

But a camp of worrywarts has sprung up as well. The skeptics fear that a toxic mix of artificial intelligence, robotics and bio- and nano-technology could make previous threats of nuclear devastation seem “easy” to manage by comparison. These people aren’t cranks. They’re folks like Jaan Tallinn, the 40-year-old Estonian programming whiz who helped create Skype, and who now fears he’s more likely to die from some AI advance run amok than from cancer or heart disease.

Or Lord Martin Rees, a dean of Britain's science establishment, whose last book bore the upbeat title, Our Final Century. He, along with Tallinn, has launched the Centre for the Study of Existential Risk at Cambridge to think through how bad things could get and what to do about it.

Now comes James Barrat with a new book, Our Final Invention: Artificial Intelligence and the End of the Human Era, that catalogues the risks and how a number of top AI researchers and observers see them. If you read just one book that makes you confront the scary prospects we'll soon have no choice but to address, make it this one.

Barrat, an Annapolis-based documentary filmmaker, notes that every technology since fire has had both promise and peril. How should we weigh the balance with AI? I think you’ll find our conversation fascinating and unsettling – and filled with issues that, for better or worse, we and our children are destined to deal with.

Credits

Host: Matt Miller

Producer: Laura Dine Million