Transcripts for C-SPAN3 Public Affairs Events, November 8, 2016


That's my first point. My second point is it's going to be relatively easy to field. Again, we discussed this in email before the panel. Most autonomous systems today are still stationary. Why? Because movement for autonomous systems is complex. It's difficult. You have object avoidance. You have lots of different types of capability. Now, the oldest autonomous weapon system on the planet is the land mine. Some people would say it's semi-autonomous because of the way it works, but if you want to go into detail, take the acoustic mines deployed in the 1950s and 1960s: they had small computers on board, and they would only target enemy vessels with a particular acoustic signature. They were actually, I think, the first really solid autonomous weapon systems in the world. Those have been around for a long time. However, they don't actually go around and try to find targets. That adds a level of complexity which is huge. So the autonomous machine guns connected to land radars, which South Korea and other countries use, stay in one place. When you go into a territory where a machine has to learn the environment, that is a very complicated machine-vision problem: looking at territory it does not know, identifying a human from a non-human and friend from foe. That is a complicated problem. Now, I say that with one final comment. I'm not even 100% sure the rules we have actually fit robots. And I'll explain why. You see, we built the rules for combat today for humans, and they come with a few hidden assumptions. One: human beings make mistakes, and we're okay with that. We accept a certain level of risk in combat for human soldiers. You're allowed to make a mistake if you're a soldier. It's not a war crime to make a mistake. It's a war crime to do something really nasty. I'll tell you a very sad story.
In one of the Israeli military operations 14 years ago, the terrorists had fielded one-ton IEDs under roads to blow up tanks. The tanks couldn't withstand the blasts and were being destroyed. One Israeli tank was traveling in that location, and the crew was really on guard for that event. Suddenly they heard this huge boom from the bottom of the tank. They were sure they had gone over an IED. Searching from the turret, they look into the periscope and they see two people running away from the site. They shoot them and manage to hit them. Only ten minutes later did they realize it wasn't an IED. The tank had gone over a huge boulder, which hit the bottom of the chassis and sounded like a huge explosion. The two people were innocent. The reality is the crew had killed two innocent people because they were in a combat situation. There was a military court-martial. They weren't found guilty. Are we willing to give computers the benefit of a mistake? Now, remember, human beings get self-defense as a defense in criminal proceedings. Are we going to give autonomous systems self-defense, or the defense of necessity? Our whole system is geared for human beings, so the bottom line is that not only is it difficult to train the robots for the rules; I'm not 100% sure the rules are ready for artificial intelligence.

I'm processing that, because I think you're onto something that's obviously extremely important: whether the challenge is to develop new rules to deal with AI, or new rules to deal with an even broader category, and what the expectations are. My sense is we can all learn a lot by coming at it from the liability side rather than by trying to define autonomy or non-autonomy and what should be avoided. I think if we go about it in a case-law sense, that's enormous.
I want to pick up on a number of things that you said. From any one perspective, is the distinction between stationary and mobile an important distinction? If one thinks about prohibitions or what could be avoided, does it matter? And then, relatedly, the defense of one's territory versus action outside one's territory. Do you want to jump in on those two points?

Can I say a couple of things? Sure. I think the fear was that stupid autonomous systems will be deployed before the actually intelligent ones, and that is not acceptable. Well, that is actually presuming, I feel, that the designers of the weapons and the people who are deploying them are doing it in an irresponsible manner. That fear is there, but I think that's not the way the induction of technology into the armed forces is done. We have to look at what is inherently wrong with fully autonomous weapon systems. It is not autonomy we're campaigning against. It's fully autonomous weapon systems, and meaningful human control is what's being looked at. And there is a weakness in defining what this "fully autonomous weapon system" is. Weapon systems that are fully autonomous are those which can select and engage targets without human intervention. That "and" is important. What is meant by selection? Only selecting is acceptable. Only engaging is also acceptable. But selecting and engaging is where the line is being drawn, and the reason is that between the selecting and the engaging there's a decision point. And one point of view is that that decision to kill should be left to a human. I would like to make the point that if we're looking at various technologies, the kill chain as we talk about it, where you first identify and then navigate to objects, in fact, if we look at the narrative of 2009, it specifically brings this out. In all these functions, autonomy is permitted. Nobody would even object to it.
It is only the decision to kill. The point I really want to make is that the complexity of AI is not going into that decision loop. That decision loop is one aspect of it. As long as a human is there, there's no technology involved in bringing the human into the loop. That is one point I wanted to make, because if we're thinking in terms of banning technology, it is trivial as far as the technology is concerned.

Coming to the question that you asked about defense and offense: actually, any military person would know it's not defense versus offense. There's no difference between them; there's only an aspect of mobility coming into it. The autonomous systems meant for defending and those for going on the offense would really be of the same nature. I don't see any difference between the two.

What about territoriality? You can avoid that distinction by saying you can operate it on your territory, but not outside your territory. Okay, I'll elaborate a little more on that. I brought out this aspect of conventional warfare vis-a-vis four scenarios. I'll take an example from India. If you have a Line of Control, there's a sanctity to it and it cannot be crossed. If we're looking at that scenario, then if you try to defend, that defense does not involve much mobility. You could have non-mobile robots handling the defense. But when we have gone into a conventional operation, then when you're talking about defense, you're also talking about mobility, so you attack. What I'm saying is, depending on which backdrop you are looking at, defense may or may not involve mobility. That's why I'm saying that, in general, to try and draw a distinction between offense and defense may not be very correct from a technology point of view. However, it would be more acceptable to those who do not want to delegate to machines.

I'll ask Daniel to comment on the territoriality thing. I'm thinking of the Iron Dome, which operates over Israeli airspace. Then there's the wall.
I'm thinking of analogies because the general is talking about the Line of Control, which separates the part of Kashmir that India controls. You can imagine that kind of boundary being a place where one might put autonomous weapons to prevent infiltration that's not supposed to be coming across. On the other hand, presumably, like last month when there's movement going back and forth across India's border, you might want to turn those systems off so you don't hurt your own people. Given Israel's experience and your experience there, does the distinction of territoriality matter, practically or legally, or no?

If we take the Iron Dome system, for example: it has been made public that the Iron Dome system has three different settings. You have the manual setting, the semi-automatic, and the automatic setting. And it's a missile defense system, right? The idea is you want to shoot down the incoming missile over a safe location, so part of the algorithm is this: the Israeli system calculates where the missile is going to hit, because it is on a ballistic trajectory and is not going to deviate from its track. You automatically do a lot of things. Then it calculates, if it's going to hit in a dangerous place, where to shoot it down so as to minimize damage. Theoretically, boundaries are not relevant for that. If you can catch the missile early enough, we wouldn't care if it landed in another country's territory. The idea being that the system is not supposed to take boundaries into consideration; it's supposed to take human lives into consideration. My gut feeling is that the stationary-versus-mobile issue is just a technological difference of complexity, and the geography is not a real issue. Although, again, following the general's footsteps, I think people will find it easier to accept that you would field such things in your own territory than that you have sent them into another country.
On the moral and public-opinion side, there are arguments to be made that these are additional steps down the road. But from a technological, and even from a legal, side, I don't really think that distinction holds.

On the complexity, let me mention again that systems targeting, let us say, mobile targets would be many times more complex. Let me just paint a picture. Again, I'll take this example from conventional warfare. You have a battle in an area of 10 kilometers by 10 kilometers. It is a contested environment where there are no civilians present. Now, this has to do with military capability. Here are the models. Today, this battle would be fought with 100 tanks on each side contesting against each other. The blue forces would be destroying the tanks, so the two states are on par. Let's say one has AI technology and has fielded autonomous armed drones instead of tanks. Now I'm trying to analyze the complexity, as compared to today's technology, of these armed drones picking out those tanks and destroying them. I think the complexity gap is hardly anything for the technology that exists. Drones are already in place. You only have to pick up tank signatures in a desert. So if a country develops that military capability, those lives would be saved. In such a scenario the complexity is not there. The complexity is there where there's a terrorist mixed in with a population, and you have to distinguish the terrorist when there's no external distinction at all. That's a complex problem. I just wanted to comment on that complexity.

Mary, come in and sort all this out for us. Thanks. I'm just thinking about the international talks we've been participating in: three weeks of talks over the last three years. They look for points of common ground where the governments can agree, because there are about 90 countries participating.
At the last meeting they said fully autonomous weapon systems do not exist yet. There was pretty widespread acknowledgment that what we're concerned about, the lethal autonomous weapon systems, are still to come. The other thing the states seem able to agree on is that international law applies, international humanitarian law applies, the legal review of your weapons under Article 36 of course applies to all of this, and there's a rough notion of what we are talking about: a weapon system that selects and engages targets without further human intervention. What they haven't been able to do yet is break it down and really get into the nitty-gritty details. I think that's where we need to spend at least a week: talking through the elements or characteristics that are concerning to us. Is it mobile rather than stationary? Is it targeting personnel rather than materiel? Is it defensive or offensive? Although those words are not so helpful for us either. What kind of environment is it operating in? Is it complex and complicated, like an urban environment, or are we talking about out at sea or out in the desert? What is the time period in which it is operating? Because it's no coincidence this campaign to stop killer robots was founded by people who worked on the campaign to ban anti-personnel land mines; we're concerned one of these machines could be programmed to go out and search for its target not just for the next few hours, but for weeks, months, even years. Where's the responsibility if you're putting a device out like that? That sums up the pause we need in the process to really get our heads around the most problematic aspects, because not every aspect is problematic, but that will help us decide where to draw the line and how to move forward.
If states have agreed that the laws of armed conflict and international law would apply, it seems to me that's a different circumstance than if they don't agree. Dan is shaking his head. Tell me why you're shaking your head, but pick up on this too.

Mary's absolutely right. You know, when I grew up, there was a band called Supertramp. Sure. We're dating ourselves. Probably when you were a teenager. One of my favorite songs when I was growing up had the opening lyric, "Take a look at my girlfriend, she's the only one I've got." We don't have a plan B. As a very old-time international lawyer who deals with this issue, I don't have an alternative set of rules to apply to this situation. So we have no choice but to say at the international convention that we will apply existing rules. We don't know how to do that. Right. That's one of the problems. The rules don't work as easily on robots as they did on humans, and they don't work on humans as easily as we thought they would. In reality, when we'll be asked to translate that into practice, we'll have a huge new challenge. That's one of them.

Let me jump right in on this, and we can continue it as a conversation. That seems to me one of the strongest arguments for at least a pause, if not a ban: a moratorium, to the extent what you just said obtains. The argument is, let's wait until we can sort this out. Tell me what's wrong with that, if anything, and whether the problem is that it is not practical, or a problem from a legal point of view.

I am also a cynical international lawyer, okay? And the reason I am is because I used to do this for a living. International law is often a tool and not an end. If you look at the list of the countries participating in the process, you will not be surprised that the primary candidates for fielding such weapons are less involved than the countries who are not supposed to be fielding those weapons.
In fact, if we take the land mine issue as a specific example: with the countries who joined the anti-personnel land mine regime, the world is divided into two groups. As a result, it is not a universal rule of international law. It is only binding on the member states, which creates a very bad principle, which is that international law is different for every single country. This is part of international law. It is how the system works, but it is one of the fallacies of the system. For example, for Canada it's unlawful to develop or field an anti-personnel land mine, but for Israel it is totally legitimate to do so. If Israel and Canada fought, Canada could not use them, which shows you how stupid international law can be. I say that to tell you what will happen with autonomous weapon systems. I know who is going to field them. The countries that are going to field them are not the countries that are going to be administering any type of results from that process. And the last thing I want to happen concerns the normal countries who have very complicated projects and approval processes for fielding weapons, like India, which came up against the robotic revolution 15 years ago and took this problem on board as one of the issues they need to tackle. I would trust them much more to handle this issue effectively than a country where I know they don't care about the collateral damage as much. So my problem with the proposed ban is my concern that it will achieve the opposite result. The good guys, who will take care to field systems only after they know they can achieve all of the good results we think they can, won't field them until they're ready with a small mistake probability. But the other people will field them earlier, and that is not necessarily a reality I want to live in. So that is where I come in on the discussion.

Mary, how do you respond to that? The treaty we're talking about is a Geneva-based convention.
All the countries that are interested in developing autonomous weapons technology are participating in it, so nothing would be decided in this body without the agreement of all of these countries. We do have China, Russia, the United States, Israel, South Korea, and the U.K. in there debating this subject. And just to come back on the land mine side, we do have 162 countries that have banned these weapons. We have gone from 50 countries producing them down to 10 as a
