Gates

The unknown future rolls toward us. I face it for the first time with a sense of hope, because if a machine, a Terminator, can learn the value of human life, maybe we can too.

That’s of course a line from Terminator 2, a film that – by law – must be referenced when you’re talking about the possible future dangers of rogue artificial intelligence. The subject has come up once again, this time with Bill Gates addressing it in a Reddit AMA hosted earlier today. Gates was essentially asked whether machine superintelligence posed a threat, something that’s been discussed in the recent past by thinkers like Elon Musk and Stephen Hawking. Gates’ response was, unsurprisingly, alarming.

“I am in the camp that is concerned about super intelligence. First the machines will do a lot of jobs for us and not be super intelligent. That should be positive if we manage it well. A few decades after that though the intelligence is strong enough to be a concern. I agree with Elon Musk and some others on this and don’t understand why some people are not concerned.”

I don’t share Gates’ confusion about why there isn’t more concern. We live in a world that’s dominated by conventional, rudimentary thinking. I personally live in the United States, a country run by a Congress that thinks the Internet is a “series of tubes.” I know that’s an outdated reference, but it’s still every bit as applicable, as the median age of both the Senate and the House is actually rising. It’s not my intention to come off as ageist, but so many of our world leaders have displayed such a startling misunderstanding of technology that it’s really no surprise we’re not taking artificial intelligence seriously as a potential threat.

I suppose all of our dystopian humor isn’t helping paint a serious picture, but I’ll be damned if I’m going to give up my Terminator jokes.

I’ll be back.
