COMMENTARY


A NEW NORMAL


As AI technology advances, so does the need for a serious discussion about its moral and ethical implications—one that considers current morals and ethics and accounts for how those societal norms will shift over time. (Image by Getty Images)


helps identify targets by classifying people in view as threats or nonthreats as well as indicating the relative location of “friendlies” and mission objectives.


Networking capabilities will further allow automated coordination to assign priority targets to individual Soldiers so that all targets are eliminated as efficiently as possible and time is not wasted by having multiple Soldiers engage the same target. Networked smart weapons will also allow logistics systems to automatically initiate resupply actions as soon as combat begins, providing just-in-time logistics all the way to the forward edge. Supply and transportation assets will be able to begin rerouting truckloads of supplies across the battlespace to the point of need. At the tactical level, small robots will be able to bring loaded magazines to individual Soldiers as their weapons reach the end of their basic combat load.
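To make the coordination concept concrete, the sketch below shows one simple way such a deduplicating assignment could work. It is a minimal illustration in Python, assuming a central coordinator, soldier and target identifiers, and a numeric priority scheme; the function name and all the numbers are invented for illustration and come from no fielded system.

```python
# Hypothetical sketch of the deduplicated target-assignment idea:
# each networked weapon reports the targets it can see, and a
# coordinator assigns every target to at most one Soldier so that
# no two Soldiers waste time engaging the same one.

def assign_targets(sightings: dict[str, list[tuple[str, int]]]) -> dict[str, str]:
    """sightings maps soldier -> [(target_id, priority), ...], where a
    lower priority number is more urgent. Returns target -> soldier,
    most urgent targets assigned first, one target per soldier per pass."""
    # Flatten to (priority, target, soldier) and sort so the most
    # urgent targets are considered first.
    candidates = sorted(
        (prio, tgt, soldier)
        for soldier, seen in sightings.items()
        for tgt, prio in seen
    )
    assignment: dict[str, str] = {}
    busy: set[str] = set()
    for _prio, tgt, soldier in candidates:
        if tgt not in assignment and soldier not in busy:
            assignment[tgt] = soldier
            busy.add(soldier)
    return assignment

if __name__ == "__main__":
    sightings = {
        "alpha": [("T1", 1), ("T2", 2)],
        "bravo": [("T1", 1), ("T3", 3)],
    }
    print(assign_targets(sightings))
    # {'T1': 'alpha', 'T3': 'bravo'} -- T1 is engaged once, not twice.
```

A real system would face a much harder optimization problem (the classic assignment problem, with ranges, lines of sight and ammunition states), but even this greedy pass captures the key property: each target is engaged by at most one Soldier.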


TOUGH ETHICAL QUESTIONS

All the above is coming in the next 10 to 20 years. The technology exists, and it is simply a matter of time, development effort and cost-benefit ratios.


Even more automation is possible in the future. DOD and society at large will be faced with complex questions as this technology continues to grow. For example, it is already possible to include AI safety features that can prevent a weapon from firing at certain “wrong” targets—that is, not firing at targets the AI system does not classify as an “enemy”—to decrease collateral damage or to prevent enemy use of friendly weapons. This, however, raises a very interesting question: What does it mean for a weapon to be fail-safe? What error rate makes it “safe” for a weapon to potentially not fire when a Soldier pulls the trigger?
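The trade-off behind that question can be shown with a few invented numbers. The sketch below, in Python, assumes a hypothetical onboard classifier whose confidence score gates the trigger; the operating points are illustrative only and describe no real weapon.

```python
# Toy numerical illustration of the fail-safe question: if a safety
# gate blocks the trigger unless the classifier is sufficiently
# confident the target is an enemy, every choice of threshold trades
# one error for another. All rates below are invented.

# (threshold, P(fires | target is enemy), P(fires | target is friendly))
OPERATING_POINTS = [
    (0.50, 0.99, 0.050),
    (0.90, 0.95, 0.010),
    (0.99, 0.80, 0.001),
]

for threshold, p_fire_enemy, p_fire_friendly in OPERATING_POINTS:
    p_withhold_enemy = 1.0 - p_fire_enemy  # weapon refuses a valid shot
    print(f"threshold={threshold:.2f}: "
          f"refuses valid shot {p_withhold_enemy:.1%} of the time, "
          f"fires on friendly {p_fire_friendly:.1%} of the time")
```

The point of the toy numbers is that tightening the threshold to protect noncombatants necessarily raises the rate at which the weapon withholds a valid shot, and vice versa. “Fail-safe” is a point chosen along that curve, not a property that can be engineered away.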


Some have raised concerns about increasing autonomy in weapon systems. Groups such as the Campaign to Stop Killer Robots and the International Committee for Robot Arms Control have called for total bans on the research and development of autonomous weapons and for limiting AI research to civilian uses only.


Such calls for a ban on development of autonomous lethal weapons, however well-meaning, seem to ignore the fact that the technology they most seek to prevent (autonomous machines that indiscriminately kill humans) already exists. Autonomous armaments that can find and kill humans will appear on the battlefield, even if not introduced by the United States or another major state, because the required technology is already available.


The reason we do not see major armies deploying such systems is that they lack the ability to discriminate between legitimate and illegitimate targets. Research and development in this area is in its infancy and is intertwined with needed policy decisions about how to precisely define a legitimate military target. Stopping research into autonomous weapons now will not prevent “slaughterbots” that indiscriminately kill; it will only prevent responsible governments from developing systems that can differentiate legitimate military targets from noncombatants and protect innocent lives.


WHAT ABOUT HUMAN ERROR?

We must consider the fact that humans make mistakes when using lethal weapons in combat. The U.S. bombing of the Doctors Without Borders hospital in Kunduz, Afghanistan, in October 2015 and the hundreds of thousands of civilian casualties in Iraq and Afghanistan attest to this reality.


We essentially still have the same “version 1.0” human that has existed for roughly 200,000 years, and capability development in humans is relatively flat. Our decision-making error rate in life-or-death situations is likely to be constant. Machine accuracy, on the other hand, is improving at an exponential rate. At some time in the future, machine accuracy at making combat-kill decisions will surpass human accuracy. When that occurs, it raises a host
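The crossover logic above is simple arithmetic. As a back-of-the-envelope sketch in Python, holding the human error rate constant and letting the machine error rate halve on a fixed schedule (all three numbers are invented for illustration):

```python
# Back-of-the-envelope version of the crossover argument: a flat human
# error rate against an exponentially falling machine error rate.
# Starting rates and improvement pace are assumptions, not data.
import math

HUMAN_ERROR = 0.01        # assumed constant human error rate (1%)
MACHINE_ERROR_NOW = 0.20  # assumed machine error rate today (20%)
HALVING_YEARS = 2.0       # assumed: machine error halves every 2 years

# Solve MACHINE_ERROR_NOW * 0.5**(t / HALVING_YEARS) = HUMAN_ERROR for t.
years = HALVING_YEARS * math.log2(MACHINE_ERROR_NOW / HUMAN_ERROR)
print(f"Machine accuracy overtakes human accuracy in ~{years:.1f} years")
# With these toy numbers: ~8.6 years.
```

Whatever the real numbers turn out to be, a flat curve and a falling exponential must eventually cross; the debate is over when, not whether.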

