All right. Good evening, everyone. Thank you for supporting your local independent and employee-owned bookstore. Before we begin: C-SPAN is filming this, so please make sure your phones are on silent. Tomorrow night the author will be reading from his new novel. On Tuesday the author of Poisoner in Chief will be joining us, and we will also have a talk on Wednesday.

Tonight we welcome the author of Rebooting AI, which argues that a computer beating a human at Jeopardy doesn't signal that we are on the doorstep of fully autonomous cars or superintelligent machines. Taking inspiration from the human mind, the book explains what we need to do to advance artificial intelligence to the next level, and suggests that if we are wise along the way, we won't need to worry about a future of machine overlords. Finally, a book that tells us what AI is, what it is not, and what it could become if we are ambitious and creative enough. It's also been called an informed account. He is the founder and CEO of Robust.AI and founder and CEO of Geometric Intelligence. He's published in journals including Science and Nature. He's the author of Guitar Zero and Rebooting AI. Thank you.

Thank you very much. [applause] Uh-oh, this is not good. Um, okay, maybe it will be all right. We had some technical difficulties here. I'm here to talk about this new book, Rebooting AI. Some of you might have seen an op-ed I had this weekend in the New York Times called "How to Build Artificial Intelligence We Can Trust." I think we should be worried about that question, because we are building a lot of artificial intelligence that I don't think we can trust. Artificial intelligence has a trust problem. We are relying on AI more and more, but it hasn't yet earned our confidence.

I also want to suggest there's a hype problem. Andrew Ng, one of the leaders of something called deep learning, which is a major approach to AI these days, says that if a typical person can do a mental task with less than one second of thought, we can probably automate it using AI either now or in the future. That's a profound claim. If it were true, then the world would be on the verge of changing altogether. It may be true someday, 20, 50, or 100 years from now, but it is not true now.

We have driverless cars that we think we can trust but we can't actually trust. This is a picture from a few years ago where a Tesla crashed into a stopped emergency vehicle. It has happened five times in the last year that a Tesla on Autopilot has crashed into a vehicle by the side of the road. That's a systematic problem. Here's another example. I am working in the robot industry, and I hope this doesn't happen to my robots: a security robot basically committed suicide by walking into a little puddle. You've got Andrew Ng saying machines can do anything a person can do in a second. A person, in a second, can look at the puddle and say maybe I shouldn't go in there, and the robots can't.

We have other kinds of problems too, like bias. A lot of people have been talking about that lately. You can do a Google image search for the word "professor" and you get back something like this, where almost all the professors are white males, even though the statistics in the United States are that only 40 percent of professors are white males, and if you look around the world, it would be lower than that. You have systems that are taking in a lot of data, but they don't know if the data is any good, and they are reflecting it back out and perpetuating cultural stereotypes.
The underlying problem with artificial intelligence is that the techniques people are using are simply too brittle. Everybody is excited about something called deep learning. It is good for many things. You can get deep learning to recognize that this is a bottle, or maybe that this is a microphone. You can get it to recognize my face, and maybe distinguish it from my Uncle Ted's face, for example. Deep learning can help some with radiology. But it turns out that all of the things it's good at fall into one category of human thought, or human intelligence: they are all things we call perceptual classification, identifying things that look the same or sound the same. That doesn't mean that one technique is useful for everything. I wrote a critique of deep learning; you can find it online for free. The summary of it says deep learning is greedy, brittle, opaque, and shallow. There are downsides to it. Even though everybody is excited about it, that doesn't mean it is perfect.

I will give you some examples, and I will give you a counterpart to Andrew Ng's claim. If you were running a business and wanted to use AI, you would need to know what it can do for you and what it cannot do for you. If you are thinking about AI ethics and wondering what machines might or might not be able to do soon, it is important to realize there are limits on the current systems. Here is my version: if a typical person can do a mental task with less than one second of thought, and we can gather a lot of data that's directly relevant, we have a fighting chance to get AI to work, so long as the test data, the things we ask the system to work on, are not too terribly different from the things we trained the system on, and the problem you are trying to solve doesn't change much over time.

This is a recipe for games. It says that what AI is good at, fundamentally, is things like games. So AlphaGo is the best Go player in the world, better than any human. It fits what these systems are good at: the game hasn't changed in 2,500 years, it has a perfectly fixed set of rules, and you can gather as much data as you like for free, or almost for free. You can have the computer play itself, or different versions of itself, which is what DeepMind did in order to make the world's best Go player, and it can keep playing itself and keep gathering more data.

Compare that, let's say, to a robot that does elder care. You don't want a robot that does elder care to collect an infinite amount of data through trial and error, working some of the time and not other times. If your elder care robot works 95 percent of the time putting Grandpa into bed and drops Grandpa 5 percent of the time, you are looking at lawsuits and bankruptcy, right? That's not going to fly for the AI that would drive an elder care robot.

When it works, the way deep learning works is there is something called a neural network, which I depicted at the top. It takes big data and makes statistical approximations. It takes labeled data: you label a bunch of pictures of Tiger Woods, golf balls, and Angelina Jolie, then show it a new picture of Tiger Woods that isn't too different from the old pictures, and it correctly identifies that this is Tiger Woods and not Angelina Jolie. This is the sweet spot of deep learning. People got very excited about it when it first started getting popular. Wired magazine had an article saying deep learning will soon give us super-smart robots.
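To make that sweet spot concrete, here is a minimal sketch of what "taking labeled data and making statistical approximations" means in code. Everything in it is illustrative: it uses scikit-learn as a tiny stand-in for a real deep learning pipeline, and made-up two-number "image features" in place of actual photos.

```python
# Minimal sketch of supervised "perceptual classification":
# label examples, fit a statistical model, classify a new one.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Pretend each photo has been reduced to a 2-number feature vector.
# Class 0: "Tiger Woods" photos; class 1: "Angelina Jolie" photos.
tiger = rng.normal(loc=[0.0, 0.0], scale=0.5, size=(100, 2))
jolie = rng.normal(loc=[3.0, 3.0], scale=0.5, size=(100, 2))
X = np.vstack([tiger, jolie])
y = np.array([0] * 100 + [1] * 100)

model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000)
model.fit(X, y)

# A new photo close to the training distribution: classified correctly.
print(model.predict([[0.2, -0.1]]))   # -> [0], "Tiger Woods"

# A point far from anything it was trained on: the model still must
# pick one of the two labels -- it has no way to say "neither".
print(model.predict([[10.0, -8.0]]))
```

The last line is the brittleness in miniature: inside the training distribution the statistics work beautifully, but the model has no notion of "this input is unlike anything I've seen."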
We have already seen an example of a robot that's not really all that smart, and I will show you some more later. This promise has been around for several years, but it has not been delivered on. There are a lot of things that deep learning does poorly, even in perception. Then I will talk about something else, which is reading.

On the right are some training examples. You would teach the system: these things are elephants. If you showed it another elephant that looked a lot like those on the right, the system would have no problem at all, and you would say, wow, it knows what an elephant is. But suppose you show it the picture on the left. The way the deep learning system responds is it says "person." It mistakes the silhouette of an elephant for a person. It's not able to do what you would be able to do, which is, first of all, to recognize it as a silhouette, and second of all, to say the trunk is really salient, so it is probably an elephant. This is what you might call extrapolation, or generalization, and deep learning can't really do it. Yet we're trusting deep learning more and more every day. It is getting used in systems that make judgments about whether people should stay in jail or whether they should get particular jobs, and so forth. And it's really quite limited.

Here's another example, making the same point about unusual cases. If you show it this picture of a school bus on its side in a snowbank, it says with great confidence, well, that's a snowplow. The system cares about things like the texture of the road and the snow; it has no idea of the difference between a snowplow and a school bus, or what they are for. It is fundamentally mindless statistical summation and correlation; it doesn't know what's going on.

This one on the right was made by some people at MIT. It's a baseball, but if you are a deep learning system, you say it's an espresso, because there's foam there. It's not super visible because of the lighting, but the system picks up on the texture of the foam and says espresso. It doesn't understand that it is a baseball.

Another example: you show a deep learning system a banana, and you put this sticker in front of the banana, which is a kind of psychedelic toaster, and because there's more color variation and so forth in the sticker, the deep learning system goes from calling the top one a banana to calling the bottom one a toaster. It doesn't have a way of doing what you would do, which is to say something like, well, it's a banana with a sticker in front of it. That's too complicated. All it can do is say which category something belongs to. That's essentially all deep learning does: identify categories. If you are not worried that this is starting to control our society, you're not paying attention.

Let's get the next slide. Maybe not. I'm going to have to go without the slides, I think, because of technical difficulties. I will continue, though. There's just nothing we can do. All right, one second here, let me look at my notes. Okay. I was next going to show you a picture of a parking sign with stickers on it. It would be better if I could show you the actual picture, but presenting slides over the web is not going to work, so: a parking sign with stickers on it, if you can imagine that. The deep learning system calls it a refrigerator filled with a lot of food and drinks. It is completely off. It's noticed something about the colors and textures, but it doesn't really understand what's going on. Then I was going to show you a picture of a dog that's doing a bench press with a barbell.
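Why can't the system say "a banana with a sticker in front of it"? Because a classifier's output layer is a fixed menu. Here is a minimal sketch of that constraint, assuming torchvision's pretrained ResNet-50; the image file name is hypothetical. Whatever the picture contains, softmax spreads all confidence over 1,000 fixed ImageNet categories, and the top answer is always exactly one label.

```python
# Sketch: a pretrained classifier can only answer with one of its
# 1,000 fixed category labels, never a relational description.
import torch
from torchvision import models
from PIL import Image

weights = models.ResNet50_Weights.IMAGENET1K_V2
model = models.resnet50(weights=weights)
model.eval()
preprocess = weights.transforms()

# "banana_with_sticker.jpg" is a hypothetical file name.
img = preprocess(Image.open("banana_with_sticker.jpg")).unsqueeze(0)
with torch.no_grad():
    probs = torch.softmax(model(img), dim=1)[0]

# Softmax forces all the probability mass onto the known categories;
# there is no output for "banana with a sticker in front of it".
top_prob, top_idx = probs.max(dim=0)
print(weights.meta["categories"][int(top_idx)], float(top_prob))
```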
Yes, something has gone wrong. [laughter] Thank you for that. All right, we'll just shut it down. [inaudible]. Yeah? I would need a Mac laptop. I just couldn't do it fast. You need a Mac laptop? I've got one. I don't think they will be willing to add it. Okay, just go on.

You have a picture of a dog with a barbell, and it's lifting the barbell. A deep learning system can tell you that there's a barbell there and a dog, but it can't tell you, hey, that's really weird. How did the dog get so ripped that it could lift the barbell? The deep learning system has no concept of the things that it's looking at.

Current AI is even more out of its depth when it comes to reading. I will read you a short little story that Laura Ingalls Wilder wrote. A nine-year-old boy finds a wallet full of money that's been dropped on the street. The father guesses that the wallet might belong to somebody named Mr. Thompson. The boy finds Mr. Thompson, and, Wilder wrote, he turns to Mr. Thompson: did you lose a pocketbook? Mr. Thompson jumps, slaps his hand to his pocket, and says: yes, I have. $1,500 in it. Is this it? Yes, that is it. He opens it and counts the money. He breathes a sigh of relief and says, well, that darn boy didn't steal any of it.

When you listen to that story, you form a mental image of it. It might be vivid or not so vivid, but you infer a lot of things, like that the boy hasn't stolen any of the money, or where the money might be. You understand why he's reached into his pocket looking for the wallet, because you know that wallets occupy physical space, that if your wallet is in your pocket you will recognize it, and that if you don't feel anything there, then it isn't there, and so forth. You know all of these things. You can make a lot of inferences about things like how everyday objects work and how people work. You can answer questions about what's going on. There's no AI system yet that can actually do that.

The closest thing that we have is a system called GPT-2, released by OpenAI. OpenAI is famous because Elon Musk co-founded it and it had the premise that they would give away all their AI for free. That's what makes this story interesting. They gave away their AI for free until they made this thing called GPT-2, and then they said it is so dangerous that we can't give it away. It was an AI system so good at human language that they didn't want the world to have it. But people figured out how it worked and made copies of it, and now you can use it on the internet.

So my collaborator and co-author Ernie Davis and I fed this story into it. Remember, the boy has found the wallet and given it to the guy; the guy has counted the money and is now super happy. What you do is feed in the story, and it continues it. What it said was: it took a lot of time, maybe hours, for him to get the money from the safe place where he hid it. It makes no sense. It is perfectly grammatical, but if he just found his wallet, what is this safe place? The words "safe place" and "wallet" are correlated in some vast database. It is completely different from the understanding that children have.

The second half, which I will talk about without visuals, is called looking for clues. We need to realize that perception, which is what deep learning does well, is just part of what intelligence is. Some of you, especially in Cambridge, might know the theory of multiple intelligences, for example: there's verbal intelligence, musical intelligence, and so forth.
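You can reproduce the shape of that experiment yourself. Here is a minimal sketch, assuming the Hugging Face transformers library and its public gpt2 checkpoint (the talk doesn't say which copy of GPT-2 was used); the prompt is a paraphrase of the story's ending.

```python
# Feed a story prefix to GPT-2 and let it continue the text.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = (
    "The boy handed Mr. Thompson the wallet. Mr. Thompson counted the "
    "money, breathed a sigh of relief, and said, 'Well, that darn boy "
    "didn't steal any of it.'"
)

# GPT-2 predicts statistically likely next words; nothing in the
# model tracks where the wallet is or what anyone believes, which is
# why a continuation can be fluent and still make no sense.
result = generator(prompt, max_new_tokens=40, do_sample=True, top_k=50)
print(result[0]["generated_text"])
```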
As a cognitive psychologist, I would also say there are things like common sense, planning, and attention; there are many different components. What we have right now is a form of intelligence that is just one of those, and it's good at doing things that fit with that. It is good at perception. It is good at certain kinds of game playing. That doesn't mean it can do everything else. The way I think about it is that deep learning is a great hammer, and we have a lot of people looking around saying, because I have a hammer, everything must be a nail. Some things actually do work with that, like Go and chess and so forth, but there's been much less progress on language. There's been exponential progress in how well computers play games, but there's been zero progress in getting them to understand conversations. That's because intelligence itself has many different components. No silver bullet is going to solve it.

The second thing I wanted to say is that there's no substitute for common sense. We really need to build common sense into our machines. The picture I wanted to show you right now is of a robot in a tree with a chainsaw, and it's cutting on the wrong side, if you can picture that, so it's about to fall down. Now, this would be very bad. We wouldn't want to solve it with a popular technique called reinforcement learning, where you have many, many trials. You wouldn't want a fleet of 100,000 robots with 100,000 chainsaws making 100,000 mistakes. That would be bad, as they said in Ghostbusters.

Then I was going to show you a cool picture of something called a yarn feeder, which is a little bowl with some yarn and a string that comes out of a hole. As soon as I describe it to you, you have enough common sense about how the physics work and what I might want to do with the yarn feeder. I was going to show you a picture of an ugly one, and you could recognize that one too, even though it looks totally different, because you get the basic concept. That's what common sense is about.

I was going to show you a picture of a Roomba; you all know the vacuum-cleaner robot. I was going to show you Nutella, and a dog doing its business, you might say, and point out that the Roomba doesn't know the difference between the two. This has happened not once but many times: Roombas that don't know the difference between Nutella, which they should clean up, and dog waste, and so they spread the dog waste through people's houses. It's an artificial intelligence common sense disaster. [laughter]

Then, what I wish I could show you the most: my daughter climbing through chairs. When she was four years old, she was small enough to fit through the space between the bottom of the chair and the back of the chair. She didn't do it by imitation. I was never able to climb through the chair; I'm a little bit too big, even if I'm in good shape and exercising a lot. She had never watched The Dukes of Hazzard, where people climb through a window to get inside a car; she had never seen that. She invented it herself, for a goal. This is the essence of how human children learn things. They set goals: can I do this? Can I walk on this small ridge on the side of the road? I have two children, five and six and a half, and all day long they just make up games: what if it were like this, or can I do that? She tried this, and she learned it essentially in one minute. She squeezed through the chair, got a little stuck, and did a little problem solving.
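To see why trial-and-error learning is the wrong tool for the chainsaw robot, consider a toy sketch of reinforcement learning. Everything in it is hypothetical: a two-action "which side do I cut?" problem, solved with tabular Q-learning. The point is that the agent only learns that an action is catastrophic by taking it, repeatedly.

```python
# Sketch: trial-and-error learning pays for knowledge with mistakes.
import random

q = {"safe": 0.0, "falling": 0.0}  # value estimate per action
alpha = 0.1                        # learning rate
mistakes = 0

for episode in range(1000):
    # Epsilon-greedy: explore 10% of the time (or on a tie).
    if random.random() < 0.1 or q["safe"] == q["falling"]:
        action = random.choice(["safe", "falling"])
    else:
        action = max(q, key=q.get)
    reward = 1.0 if action == "safe" else -100.0  # tree falls on robot
    if action == "falling":
        mistakes += 1
    q[action] += alpha * (reward - q[action])

print(q, "crashes during training:", mistakes)
# The agent does learn -- but only by crashing dozens of times first.
# Fine in simulation; disastrous for 100,000 real robots with chainsaws.
```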
This is very different from collecting a lot of data with a lot of labels, the way deep learning works right now. I would suggest that if AI wants to move forward, we need to take some clues from kids and how they do these things.

The next thing I was going to do was quote Elizabeth Spelke, who teaches at Harvard, down the street. She has made the argument that if you are born knowing there are objects and sets and places and things like that, then you can learn about particular objects and sets and places; but if you just know about pixels and videos, you can't really do that. You need a starting point. This is what people call the nativist hypothesis. Here I like to show a video. People don't want to think that humans are built with notions of space and time and causality, as has been argued and as I'm suggesting AI should have, but nobody has any problem thinking animals work this way. So I show this video of a newborn animal, and you have to realize that there's something built into its brain: there has to be an understanding of three-dimensional geometry from the minute it is born. It must know something about physics and its own body. That doesn't mean it can't calibrate, figure out how strong its legs are and so forth, but as soon as it is born, it knows that. The next video I was going to show you, you will have to look up online: robots failing. It shows a bunch of robots doing things like opening doors and falling over.