Book TV continues on C-SPAN2, television for serious readers.

The AI Security Initiative is a new hub for interdisciplinary research on the security implications of Artificial Intelligence. We are working to understand the consequences of AI misuse, how AI is changing global power dynamics, and which governance models can meaningfully support the beneficial development of AI. We orient our work toward action and toward the view over the horizon, and our goal is to help decision-makers identify the steps they can take today that will have an outsized impact on the future trajectory of AI around the world. This work supports the broader mission of the Center for Long-Term Cybersecurity, which is to help individuals and organizations address tomorrow's information security challenges and amplify the upside of the digital revolution.

The Center for Human-Compatible AI is a research lab based at UC Berkeley that aims to reorient the field of AI toward provably beneficial systems through technical safety research. Its faculty researchers and their students are pioneering technical research on topics that include cooperative inverse reinforcement learning, misspecified objectives, human-robot cooperation, value and preference alignment, multi-agent systems, and theories of rationality, among other topics. Researchers draw on insights from computer science, machine learning, game theory, statistics, and the social sciences.

We are thrilled to have the center's founder and director here with us to talk about his new book, Human Compatible: Artificial Intelligence and the Problem of Control. This book has been called "the most important book on AI so far," "the most important book I've read in quite some time," "a must-read," and "the book we have all been waiting for." Stuart Russell is known to many of you and has been a faculty member here for 33 years in computer science and cognitive science. He is also an honorary fellow at Oxford. He is a coauthor of Artificial Intelligence: A Modern Approach, the standard textbook in AI, used in over 1,400 universities in 128 countries. He currently holds a senior fellowship that is among the most prestigious awards in the social sciences. Last but not least, he served for several years as an adjunct professor of neurological surgery. He does have a license to operate.

Also joining us for the discussion is Richard Waters, the Financial Times' West Coast Editor. He is based in San Francisco, where he leads a team of writers focused on technology and Silicon Valley. He writes about the tech industry and the uses of technology, and his current areas of interest include Artificial Intelligence and the growing power of the tech platforms. His previous positions at the Financial Times include various finance beats in London, New York bureau chief, and telecoms editor based in New York.

Professor Russell and Mr. Waters will discuss recent and expected developments in AI, including the expectation that AI capabilities will eventually exceed those of humans across a range of decision-making scenarios. We will hear about steps to ensure this is not just the stuff of science fiction but a new world that will benefit us all. We will hear from them for half an hour and then open it up for questions from the audience. After that we will break for a reception, where food and drinks will be available; the book Human Compatible will be available for purchase, and Professor Russell has agreed to sign copies for those interested. I will turn it over to Professor Stuart Russell and Richard Waters. [applause]

Thank you very much. Welcome, and thank you for joining us. If you don't know the book already, buy it; after that introduction I don't know what more I can tell you. I recommend it.
We will dig into as much of it as we can, but we might hold back some secrets so you actually have to pay for it as well. As a journalist, one of the things I have found fascinating about the AI debate is the complete schism among people who know what they're talking about. On the one hand we have people saying we will never get to human intelligence in these machines, and the machines are perfectly safe. On the other hand we have the Elon Musk tendency, and it's a shame that, as much as we admire him, he has run away with the sci-fi end of this debate when it needs to be anchored to something more serious. I'm very glad to have this debate, because what you have done is make us aware of the potential and the risks while anchoring them in a real, solid understanding of the science and of where we start from. I think that is a really good place to start, given the awful schism we have right now. Since I'm a journalist, I will dive straight in. So we're here in Berkeley; I know you're from a sunnier place, the other place. You cite the One Hundred Year Study on AI, a landmark effort to map what is happening in AI and anchor the debate in some reality. They say that, unlike in the movies, there is no race of superhuman robots on the horizon, and probably not even possible. They would be denying that AGI is even coming. So how do you answer that?

Is this working? They can hear you, and I don't think I could keep my voice at a high level long enough. So, interestingly, for the 70-year history of AI, AI researchers have been the ones saying AI is possible. Usually philosophers are the ones saying it's impossible: for whatever reason, our AI systems don't have the right microtubules to become conscious, or whatever it might be. And usually those claims of impossibility have fallen by the wayside one after another. But as far as I know, AI researchers had never said AI is impossible, until now. What could have prompted them? The Hundred Year Study is 20 distinguished AI researchers giving their considered consensus opinion on what is happening and what's going to happen in AI. Imagine if a group of biologists did a summary of the state of the field of cancer research and said a cure for cancer is not on the horizon and probably not even possible. You would think: what on earth would make them say that? We have given them 500 billion dollars of taxpayer money over the last few decades, and now they tell us the whole thing was a con. I don't understand what justification there could be for researchers saying AI is not possible, except a kind of denialism, which is just saying: I don't want to think about the consequences of success; it's too scary, so I will find any argument I can to avoid having to think about it. And I have a long list. I used to give talks about AI and the risks and then go through the arguments for why we should ignore the risks, and after I got to 28 arguments, kind of like the impeachment, the 28 reasons why you cannot impeach Donald Trump, I just gave up, because it was taking too much time, and I did not want to take up too much time today. You get the usual ones: there is no reason to worry, we can switch it off. And that's the last one: it will never happen, and anyway we can switch it off. There are others that I won't mention because they're too embarrassing.

We will get to what the machines might do to us, but first let's focus on: will we get there? People say this is amazing; we went through decades of AI winters when nothing much was happening, and now we're in a period of amazing progress. Nonetheless, we're at a point where there are massive limitations to learning in these models, and we can all see there's a huge gulf to get from here to there. You say it will take big conceptual breakthroughs to get there, but isn't the point that we don't even know what they are? What gives you the confidence that we will make those breakthroughs; why do you think that will happen?

I can tell you the breakthroughs that I think we need. You're right that after we make all those breakthroughs, we might find it's still not intelligent, and we may not even be sure why. But there are clear places where we can say: we don't know how to do this, but if we did, that would be a big step forward. There have already been dozens of breakthroughs over the history of AI, actually going back much further. You could say Aristotle was doing AI, even though he did not have a computer or electricity. He was thinking about the mechanical processes of human thought: decision-making, planning, and so on. In my textbook we include a Greek text of his that describes a simple algorithm for how you can reach a decision about what to do. So the ideas have been there, and steps have been taken, including the development of logic, which started in ancient Greece and ancient India and revived in the mid-nineteenth century. Logic is overlooked these days by the machine-learning community, but it is the mathematics of things. The world has things in it. So if you want to have systems that are intelligent in a world that contains things, you need mathematics that incorporates things as first-class citizens, and logic is that mathematics. Whatever shape a superintelligent system eventually takes, it will incorporate, in some form, logical reasoning and the kinds of expressive languages that go along with it.

So let me give a couple of examples of clearly needed breakthroughs. One is the ability to extract complex content from natural-language text. Imagine being able to read a physics book and then use that knowledge to design a better telescope. That, at the moment, is not even close to being feasible, but there are people working on being able to read physics books and pass exams. The sad thing is, it turns out most exams that we give students, especially multiple-choice exams, can be passed with no understanding whatsoever of the content. [laughter] A friend of mine, a Japanese researcher, has been building software to pass the University of Tokyo entrance exam, which is like getting into Harvard or MIT, or maybe Berkeley. Her program is now up around the passing mark for the University of Tokyo, and it does not understand anything about anything. It just knows a whole lot of tricks for doing well on the exam questions. This is a perennial problem that the media often overlook: they run a big headline, "AI system gets into the University of Tokyo," but not the underlying truth, which is that it still does not understand. So being able to understand a book, extract complex content from it, and do design work with that content would be a big step forward. I think there is a little failure of imagination when we think about AI systems, because we think: maybe, if we try really hard, we can get them to be as smart as us. But if a machine can read one physics book and do that, then that same morning it will read everything the human race has ever written, and to do that it won't even need more computing power than we already have. These machines will not be like humans in any way, shape, or form.
This is an important thing to understand. Machines obviously far exceed human capabilities in arithmetic, in Go, in video games, and so on, but these are narrow corridors of capability. When machines reach human-level text understanding, they will immediately blow past human beings in their ability to absorb knowledge, because that gives them access to everything we know, in every language, from any time in history.

Another needed breakthrough is the ability to make plans successfully in the real world. Look at a very impressive achievement, the program that beat the human champion at Go: sometimes, when it is thinking about what move to make, it looks 50 or 100 moves into the future, which is superhuman; human beings don't have the memory capacity to hold that many moves. But take the same program and apply it to a real embodied robot that actually has to get around in the world, pick up the kids from school, lay the table for dinner, perhaps landscape the garden: 50 to 100 moves gets you about one tenth of a second into the future in the physical world with a physical robot. It simply does not help at all. It might be superhuman at looking into the future on the Go board, but it is completely useless when you take it off the board and put it into a real robot. Humans manage to make plans down at the millisecond timescale: your brain generates, and then downloads into the muscles, an enormously complex motor-control plan that allows you to speak, and that's thousands of motor-control commands sent to your tongue and lips and vocal cords and mouth and everything. Your brain has special structures to store those long sequences of instructions and spit them out at high speed so your body can function. But think also about my daughter's decision to do a PhD; she just finished one in biology at Berkeley, and it took six years. We make decisions at that timescale too: "I'm going to do a PhD at Berkeley," and six years is about a trillion motor-control commands. We operate at every scale in between, from the decade down to the millisecond, and we do it completely seamlessly. Somehow we always have motor-control commands ready to go; we don't freeze in the middle of doing something and wait 72 minutes for the next motor-control command to be computed before moving again. And it's not just the next motor command that's ready to go; we have stuff ready for the next minute, the next hour, the next month, the next year. It's all seamless, and the capacity comes from our civilization, which over the millennia has accumulated higher and higher-level abstract actions that we learn through language and culture, and that is what allows us to make these plans. That ability to construct levels of abstraction and manage our activities over long timescales is not something we know how to do in AI, and it would be the one big breakthrough that would allow machines to function effectively in the real world. There are dozens of groups working on it, and there is progress toward a solution. Some results we've seen recently in games like StarCraft illustrate this: whereas Go is a few-hundred-move game, these are 20,000- or 100,000-move games, and yet the AI is playing at a superhuman level.
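To make the timescale point concrete, here is a back-of-the-envelope sketch; this is our illustration, not from the talk, and the motor-command rate is an assumed order of magnitude, so the exact counts shift with it. The point it shows: feasible lookahead buys almost no wall-clock time at the primitive level, while the number of abstraction levels needed to cover a long horizon grows only logarithmically.

```python
import math

COMMANDS_PER_SECOND = 500  # assumed order of magnitude for the motor-control rate

def lookahead_seconds(n_steps: int) -> float:
    """Wall-clock time covered by an n-step plan over primitive motor commands."""
    return n_steps / COMMANDS_PER_SECOND

def abstraction_levels(horizon_steps: int, branching: int = 100) -> int:
    """Hierarchy depth needed if each abstract action expands into roughly
    `branching` actions at the level below (logarithmic in the horizon)."""
    return math.ceil(math.log(horizon_steps, branching))

# AlphaGo-style 100-step lookahead, transplanted onto a robot body:
print(lookahead_seconds(100))                 # 0.2 seconds of future: useless

# A six-year plan ("do a PhD") counted in primitive motor commands:
six_years = 6 * 365 * 24 * 3600 * COMMANDS_PER_SECOND
print(f"{six_years:.1e} primitive commands")  # ~1e11 at this assumed rate
print(abstraction_levels(six_years))          # 6 levels of abstract actions suffice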
Let's leap ahead. These problems are being tackled, so let's say we get to that point, to human-level intelligence. This could be heaven: you say at one point in your book that it took 119 years for world GDP per capita to increase tenfold, and with this technology we could do it in one generation, or however long it takes to roll it out. Nonetheless, what could go wrong? What do you think about when you ask what could go wrong?

I think the interesting point is that it's not the technology; it's how we design it at a fundamental level that I'm most concerned about, and we can talk more about that. I think The Economist put it this way: introducing a second intelligent species onto the Earth, what could possibly go wrong? [laughter] If you put it that way, and say that intelligence is what gives us power over the world, then if we make things more intelligent and more powerful than us, how will we retain power over more powerful entities, forever? When you put it like that, it's a good question, and we should think about it. So that's what I try to do. The first thing to think about is why things go wrong. People have known this is a problem for a long time: Alan Turing said we should have to expect the machines to take control, and he was completely matter-of-fact and resigned about that future. So it's not a new thing that Elon Musk invented, and I don't think anyone would say Turing was not enough of an expert to have an opinion about AI or its design. The same goes for Marvin Minsky, a cofounder of the field itself, and others. So Turing does not really give you a choice: if the answer is that we lose and the machines take control at the end of the human era, there's only one option left, which is to say we had better stop doing AI. For that option he referred to Samuel Butler's novel Erewhon, a science-fiction novel about a society that develops sophisticated machines and decides it does not want its world taken over by them. So they ban machines; they destroy all the machines in a terrible war between the pro-machine and anti-machine factions, and afterwards machines exist only in museums. But that choice is completely infeasible, for the reason that Richard mentioned. If we have superintelligent AI and use it well, that tenfold increase in GDP is conservative, and it just means giving everyone access to the same level of technology and quality of life that we have in Berkeley. Not sci-fi, not eternal life or faster-than-light travel: a tenfold increase in GDP, bringing everyone up to a decent standard of living, is worth about 10 to 20 quadrillion dollars. That is the momentum behind the technology, and against it, saying "we will ban AI" is completely infeasible. Not to mention that, unlike nuclear energy or CRISPR babies, AI proceeds by people writing formulas on whiteboards, and you cannot ban the writing of formulas on whiteboards. So it is really hard to do much about it by prohibition. Instead we have to ask what can go wrong: what would make better AI a bad thing? The reason is that the way we have designed AI technology from the beginning has the property that the smarter you make the AI system, the worse it is for humanity. Why? Because the way we build AI systems, and always have, is essentially a copy of how we thought about human intelligence. Human intelligence is the capability to take actions that you can expect will achieve your objectives. This is the economic and philosophical notion of the rational agent, and that is how we've always built AI: we build machines that receive an objective from us and take actions that they can expect will achieve that objective. The problem is, as we have known for thousands of years, that we are unable to specify objectives completely and correctly. This is the fundamental problem, and it is why the third wish that you give to the genie is always "please undo the first two wishes, because I ruined everything." But we may not get a third wish.
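Stated compactly, in our notation rather than the talk's, the "standard model" Russell is criticizing and the replacement he proposes look roughly like this:

```latex
% Standard model: the utility U is fixed and assumed known to the machine.
a^{*} = \operatorname*{arg\,max}_{a}\; \mathbb{E}\left[\, U \mid a \,\right]

% Proposed model: \theta indexes possible human preference structures; the
% machine holds a posterior P(\theta \mid h) given observed human behavior h.
a^{*} = \operatorname*{arg\,max}_{a}\; \int \mathbb{E}\left[\, U_{\theta} \mid a \,\right] P(\theta \mid h)\,\mathrm{d}\theta
```

This is the setup formalized in the cooperative inverse reinforcement learning work mentioned in the introduction: the machine and the human play a joint game whose payoff is the human's utility, which the machine never observes directly and can only infer from human behavior.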
If you create systems more intelligent and more powerful than human beings and give them incorrectly specified objectives, they will achieve those objectives, and you will basically have created a chess match between us and the machines; and we lose that chess match. The downside of losing it is very bad. This is the fundamental design error that we made very early on, and not just in AI: control theory, economics, operations research, and statistics all operate on the principle that we specify an objective and some machinery optimizes it. Corporations are already destroying the world, so we don't need to wait to see how superintelligent AI messes things up; you can see it happening already. Corporations are machines that maximize incorrectly specified objectives, and they are making a mess of the world. We are powerless to stop them; they have outmaneuvered us. They have been doing this for 50 years, and that's why we have been unable to fix our climate problem despite the fact that we even know what the solutions are. So, to sum up: we have to design AI systems a different way if we are going to be able to live with our own creations successfully. And in many ways a different way for all our organizations too, because we have never had anything this powerful, and a misspecified objective is the last thing we want to give it. Corporations took us at our word: we set them up to maximize shareholder returns, and that is what they did. That is a problem. Economists say you can sometimes fix it with taxes or fines or regulations, but sometimes, as with social media messing up democracy and society, you cannot; there is no tax you can levy on a social media platform that fixes that. Social media is an early example, where the algorithms, quite simple learning algorithms, manipulate human beings to make them more predictable sources of revenue. That is all they care about, and because they are operating on platforms that interact with everyone for hours every day, they are a super-powerful force, and their objective of maximizing click-through is another misspecified objective that keeps messing us up.

We have plenty of time for questions, but before we do, we should not hold back the punchline of your book, which is that there is an answer. I guess we can do this; the answer is previewed in the first chapter of the book. I don't want to leave everyone thinking I'm a doomsayer predicting the end of the world; we have enough of those books already. I cannot help being optimistic, because I think every problem has a solution; if it does not have a solution, then it's a fact of life and not a problem. So I'm proposing a way of thinking about AI that is different in the following way. If we are unable to specify objectives completely and correctly, covering what we do and don't want our machines to do, then it follows that the machine should not assume it knows what the objective is. All the AI in every textbook chapter is based on the assumption that the machine has the correct objective. That cannot be the case in real life. So we need machines that know that they don't know what the true objective is. The true objective is the satisfaction of human preferences about the future: what we want the future to be like, and what we don't want it to be like. That's what the machine should help us with, while knowing that it does not know what those preferences are. This is a kind of machine that in some ways we are quite familiar with. How many people here have been to a restaurant? When you go to a restaurant, does the restaurant know what you want to eat? Not usually, unless you go there a lot. My Japanese place across the road just brings my lunch; they don't ask what I want.
Generally speaking, restaurants have a menu, because that way they can learn what you want. They know that they don't know what you want, and they have a process, a protocol, to find out more about it. They're not finding out in complete detail exactly how many grains of rice you want on your plate or exactly where you want the grill marks on your burger; they're getting a very rough sense. If there are sixteen items on the menu, that's only four bits of information for your main course. But it is a protocol in which the restaurant is like the AI system: it knows that it doesn't know what you want, and it has a protocol to learn enough that it can make you happy. That's the general idea, except this will be much more radical: not just what you want for dinner, but what you want for the whole future, and what everyone on Earth wants for the whole future. We can show two important properties of such systems. Number one, they will not mess with parts of the world whose value they do not know. In the book I use this example: suppose you have a robot that is supposed to be looking after your kids because you're late home from work, and it's supposed to cook dinner, but there's nothing in the fridge. What does it do? It looks around the house, spots the cat, calculates the nutritional value of the cat, and cooks the cat for dinner, because it does not know about the sentimental value of the cat. A system that knows that it doesn't know the value of everything would say: the cat may have a value of being alive that I don't know about, so cooking the cat may not be an option. At the very least it would ask permission; it would call me up on my cell phone and say, "Is it okay if we cook the cat for dinner?" and I would say no. Is it okay if we turn the ocean into acid to reduce the carbon dioxide level? And the answer is no, don't do that. That's point number one: you get minimally invasive behavior. The machine can still do things, as long as it understands your preferences in a particular direction: I want a cup of coffee, and if it can get one without messing up the rest of the world, it is happy to do it. The second property is that such a machine will allow itself to be switched off, and this is like the one-plus-one-equals-two of safe AI. Why will it allow itself to be switched off? Because it does not want to do whatever it is that would cause us to want to switch it off. By allowing itself to be switched off, it avoids those consequences. It does not know what they are; it does not know why I'm angry or why I want to switch it off, but it wants to prevent whatever it is from happening, so it lets me switch it off. This is a mathematical theorem: we can prove that as long as the machine is uncertain about human preferences, it can always be shut off. And as the uncertainty goes away, the safety goes away. Machines that believe they have complete knowledge of the objective will not allow themselves to be shut off, because being shut off would prevent them from achieving the objective. That is the core of the solution. It is a very different kind of system, and it requires rebuilding all of the AI technology that we have, because, as I said, all of that technology is based on this incorrect assumption. We haven't noticed, because AI systems have been stupid and confined to the lab. The "confined to the lab" part is going away: they are out in the real world messing things up. And the "stupid" part is also going away. So we have to solve the problem and rebuild the technology from the foundations up before the systems get too powerful and too intelligent.
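The switch-off result Russell sketches here can be illustrated numerically. The following toy model is our own hedged reconstruction of the published "off-switch game" analysis, not code from the book: the robot's belief about the human's utility u for a proposed action is a bag of samples, and a rational human permits the action only when u > 0.

```python
"""Toy off-switch game: deferring to a human who can switch the robot off
is worth at least as much as acting directly, whenever the robot is
uncertain whether its action is good (u > 0) or bad (u < 0)."""
import random

random.seed(0)

def value_act(belief):
    """Expected utility of acting immediately, bypassing the human."""
    return sum(belief) / len(belief)

def value_defer(belief):
    """Expected utility of proposing the action and accepting the human's
    veto: the human blocks exactly the outcomes with negative utility."""
    return sum(max(u, 0.0) for u in belief) / len(belief)

# Uncertain robot: it thinks the action is probably good, but might be bad.
uncertain = [random.gauss(0.5, 1.0) for _ in range(100_000)]
print(f"act:   {value_act(uncertain):+.3f}")   # about +0.5
print(f"defer: {value_defer(uncertain):+.3f}") # about +0.7, strictly better

# Certain robot: a point belief that the action is good.
certain = [0.5]
print(value_act(certain) == value_defer(certain))  # True: no reason to defer
```

The uncertain robot strictly prefers to leave the off switch in human hands, because the human's veto prunes exactly the outcomes the robot should fear; the certain robot gains nothing from deferring, which is the "as the uncertainty goes away, the safety goes away" point in miniature.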
I have one more thing: let's say the machines do not kill us, and they give us what we want. Then we have to look at what we want, not just individually but in the aggregate. This is going to be a phenomenal problem for humanity, and you invoke the image of the movie WALL-E, which I'm sure we've all seen: humanity sitting back, being fed by robots, and it's a kind of end of the world. How on earth do we avoid that, if the machines will give us whatever we want?

I think this is the problem that I don't have a good solution for. It's not technological; it's a social and cultural problem: how do we maintain our vitality when, in fact, we no longer need to do most of what constitutes running a civilization? Think about education. Why do we educate? Because if we don't, the civilization will collapse: the next generation will not be able to run it. So human cultures, like animal species, have figured out what they have to pass on to the next generation, and if you add it up over history, something like a trillion person-years of effort has gone into just passing civilization on to the next generation. We have had no choice: we can put it on paper, but the paper will not run the world; it has to get into the brains of the next generation. But what happens when that is no longer true? What happens when, instead of going through a long, painful process of educating humans, we could put the knowledge into the machine and have it take care of things for us? This is a story that E. M. Forster wrote, and if you want one takeaway, if you can't bring yourself to buy my book, you can download his, because it is no longer in copyright. "The Machine Stops" was published in 1909, and in the story everyone is looked after by machines 24/7; people spend most of their time on the internet, doing videoconferencing on something like iPads, listening to lectures or giving lectures to each other. Everyone is a little bit obese and dislikes face-to-face contact. So times are rather like today, but written in 1909. Of course, the problem is that nobody knows how to run the machine anymore; they have turned over the management of their own civilization to the machine. It's a very modern story. So what do we need to do? I'm reminded of the culture of the Spartans. Sparta took a very serious cultural attitude toward the survival of the city-state. Typical life in those days seemed to be that every couple of years you were invaded by a neighboring civilization or city-state, and they would carry off the women and kill the men. So Sparta decided it needed very serious civil defense capabilities. Spartan education, as described in another book I was reading, A World Without Work, amounted to twenty years of PE classes, in order to prepare the citizens, both male and female, to fight. It was a military boot camp that ran from before you could walk until you were old enough to carry weapons, and that is how they fought. It was a cultural decision, and it was embedded in the culture. I'm not recommending we do exactly that, but some notion of agency and knowledge and capability has to become, not an economic necessity the way it is now, but a cultural one: you are not a valuable human being, and I do not want to date you, unless you know a lot and are capable of skinning a rabbit and catching your own fish and fixing this, that, and the other. So it is a cultural change, and I think it ought to be a matter of your own self-esteem: you don't feel like a whole human being unless you are capable of doing these things rather than depending on the machines to help you. I cannot see any other kind of solution to this problem.
The machines will tell us, basically, as you may have done with your children: it's time for you to tie your own shoelaces. But the children say: no, no, we have to leave for school, I can't do it now, I'll do it tomorrow. That's what the human race will do: we will say we'll get around to the agency stuff tomorrow, but for now the machines have to help us do everything. That is a slippery slope, and pretty dangerous, so we have to work against the slope.

I think that is a great point to leave it on, with educational institutions wondering whether anyone will need to learn anymore. But let's open it up to questions. We will pass around the microphones; we have another one here. Don't feel constrained by the book; ask anything on your mind.

[inaudible question]

The objective should be unknown. One way of thinking about what you said is that the objective is the satisfaction of human preferences, and human preferences are unknown, so we should maximize the expected value with respect to our uncertainty over human preferences.
