Robot Bombers

There’s a lot of disturbing stuff in the news this week, but I’m going to go back in time to comment on a theme that science fiction has portrayed for decades. That’s right, we’re going to talk about killer robots.

We all look forward to the friendly robots portrayed in Asimov’s Foundation series and brought to life in the series “Humans.” Robots so lifelike that they’re given some sort of tell, like purple eyes, that distinguishes them from humans at first glance. Friendly robots designed to make humans feel comfortable around them, sometimes extremely comfortable, and to pose no threat to their human masters. All of us science fiction aficionados would expect those robots to be bound by Asimov’s Three Laws of Robotics:

First Law: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Second Law: A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

Scads of plots for books and movies have tested the boundaries of what happens when robots become self-aware and question whether or not they have the right to survive. Even the kids’ movie “Short Circuit” dives into the question of what counts as life and to what lengths life must be preserved. Which brings us to the big question: the preservation of life.

On July 8, 2016, the Dallas, Texas police used a robot to kill a suspect. In a scene straight out of “RoboCop,” they sent a bomb-disposal robot rigged with explosives after an armed suspect and detonated it. Intellectually, I know that dead by SWAT is the same as dead by robot bomb; it just feels less sporting, or something.

Of course, I always question why time limits get put on these things. I also question why a flash-bang couldn’t have been used. If the purpose was to disarm the suspect, there were probably a couple of lesser methods of intervention that would have preserved life. I suspect the answer goes more to the “time” issue than to the loss of life.

In my mind, if the perpetrator is pinned down and you know he’s not going anywhere, wait until he falls asleep. All of us, no matter how deranged, eventually have to sleep. I guess waiting makes the police look like they’re not in charge or something.

On the flip side, the police have this cool new toy and have just been waiting for the right situation to justify using it on the public. Will future criminals be deterred from committing crimes for fear of a piece of machinery taking their life? Think of the electric chair and how it hasn’t kept the public safe from criminals with nothing to lose.

Where does all of this go? Hard to say. With ever-continuing advances in A.I., it won’t be long before killer robots are given decision-making processes separate from their operators. Will the programmers have the foresight to program in Asimov’s laws, or will they deem that they, and by extension the robots, are smart enough to make decisions on the ground without any system overrides?
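For the programmers in the audience, here’s a toy sketch of what “program in Asimov’s laws” could mean in practice: a hard gate that every proposed action must pass, with no operator flag that can switch it off. Everything here, the names, the fields, the whole setup, is hypothetical and wildly simplified; the point is just that a safety law only matters if nothing downstream can override it.

from dataclasses import dataclass

@dataclass
class ProposedAction:
    description: str
    injures_human: bool  # First Law: would this action injure a human being?

class FirstLawViolation(Exception):
    """Raised when a proposed action would injure a human being."""

def authorize(action: ProposedAction, operator_override: bool = False) -> ProposedAction:
    # Note what is deliberately missing: operator_override is accepted and ignored.
    # A “law” the operator can toggle off is a feature, not a law.
    if action.injures_human:
        raise FirstLawViolation(f"Refused: {action.description}")
    return action

# The robot may open a door; it may never detonate a charge near a person.
authorize(ProposedAction("open the door", injures_human=False))
try:
    authorize(ProposedAction("detonate charge near the suspect", injures_human=True))
except FirstLawViolation as err:
    print(err)  # Refused: detonate charge near the suspect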

I’m of the opinion that the programmers will believe in their own infallibility and let the robots shoot or blow up as they see fit. As the military-industrial complex has pointed out time after time, it’s hard to sell a product that performs correctly “most of the time.” So killer robots will be sold as “next wave technology” that “saves human lives” and “takes some of the risk out of police work.” It’s never an ethical or moral issue; it’s a marketing problem.

Skynet looms ever closer.
