Critical Thinking with Paul Scharre: Humans Out of the Loop

Our interview for the April – June Critical Thinking feature with Paul Scharre—an Army Ranger veteran who served tours in Iraq and Afghanistan and is now a robotics expert at the Center for a New American Security—covered more good material than we could fit in print, and we didn’t want to leave it on the cutting room floor. Here’s Scharre on humans out of the loop.

Army AL&T: What work is being done on armed unmanned systems (without a human in the loop)?

Scharre: There’s a very robust discussion going on internationally and within certain sectors of the U.S. defense community, particularly among lawyers, about this idea of autonomous weapons that would do targeting on their own and are programmed by people but would be released into some area, like a mobile mine that hunts targets on its own. It raises a lot of very difficult legal issues, some ethical challenges and some very practical operational considerations about things like risk and fratricide and, in some cases, strategic stability in crises.

I’m writing a book on this, actually. It’s a very interesting topic.

Army AL&T: Is there anyone out there who’s really doing well at getting a handle on these questions? Does the Pentagon have a good grasp on it?

Scharre: I think the people in the Pentagon are thinking about the issue. A number of … senior leaders have spoken about this publicly. Deputy Secretary of Defense Bob Work has talked about it. Vice Chairman Gen. Selva [Air Force Gen. Paul J. Selva, vice chairman of the Joint Chiefs of Staff] has spoken about this publicly. Right now our vision is that humans are going to be in the loop. But we acknowledge that technology might take us to a place down the road where particularly pressures of speed may push us to confront some of these difficult questions, especially if our adversaries show less restraint.

We already have modes on air, missile and rocket defenses today that allow autonomous reactions to incoming threats, whether it’s an air defense system or an Aegis weapon system on a ship or active protection systems for vehicles. These have a mode where you turn it on and a person supervises it and can turn it off. But the speed and pace of threats are such that you have to automate the reaction. I would submit that that is extremely different from a robotic system that’s offensive, hunting for targets on its own with no person there to supervise it and turn it off in real time.

The magnitude of risk there is vastly different. Now, people say we already have this—we already have this with Aegis. No, we don’t, because what we’re talking about is taking an Aegis weapon system on a ship, turning it to full auto-special, turning the ship toward enemy terrain and having everybody get off the ship, and just saying, well, let’s cross our fingers and hope everything goes fine. That is a very different world from what we have lived in for the last several decades. And sometimes that is underappreciated in the community that thinks about these issues. People say, well, we’ve done full auto before, what’s the big deal?

It’s very different when we’re talking about a world where we don’t have real-time human supervision, we don’t have positive control. Take something like a Patriot or an Aegis. They’re not flawless. We’ve had accidents, but people have a very direct, immediate ability to reassert control over these weapon systems if there are accidents. And you would not have that with a fully autonomous robotic weapon that’s going out and hunting targets.

That doesn’t mean it’s inherently illegal, but it does mean that we would want to think really hard about risk. That conversation is starting, but mostly it’s in the legal space. Thinking from a military operational standpoint is less mature: Why would I want to do this? Is this a good idea? Never mind whether it’s legal; it’s not illegal to make a hand grenade with a half-second fuze. It’s just a bad idea. Is that level of autonomy a good idea in the first place? I think that’s a good starting point.

Read the full interview at http://usaasc.armyalt.com/#folio=90

GO FOR FULL AUTO?
An unmanned boat operates autonomously during an Office of Naval Research-sponsored demonstration of swarm-boat technology at Joint Expeditionary Base Little Creek-Fort Story, Virginia, in September 2016. While autonomous operation holds much promise, Scharre noted that several thorny issues—legal questions, ethical challenges and operational considerations—must first be addressed. (U.S. Navy photo by John F. Williams/Released)

Subscribe to Army AL&T News, the premier online news source for the Acquisition, Logistics, and Technology (AL&T) Workforce.