After Words airs Saturdays at 10:00 p.m. and Sundays at 9:00 p.m. Eastern on Book TV on C-SPAN2. All previous After Words programs are available as podcasts and to watch online at booktv.org.

Good evening everyone. Welcome to Borders Books, and thank you for supporting our bookstore. Before we begin, C-SPAN is filming, so please make sure all cell phones are on silent. I want to let you know about a couple of other events. On Tuesday, Stephen Kinzer is going to present a history of the CIA. We also have events next week, and tickets are available for Monday's conversation with Celeste and for the talk on Wednesday. Tonight we welcome Gary Marcus, author of Rebooting AI, which argues that a computer beating a human at Jeopardy does not signal that we are on the doorstep of fully autonomous cars. Taking inspiration from the human mind, the book explains what we need in order to advance to the next level, and suggests that if we are wise along the way, we will not need to worry about a future of machine overlords. Finally, a book that tells us what AI is, what it is not, and what it could become if we are ambitious enough. A lucid and deeply informed account. Gary Marcus is the founder and CEO of Robust.AI, and was CEO of Geometric Intelligence. He has published in leading journals and is perhaps the youngest Professor Emeritus at NYU. [applause]

This is not what we wanted to see, but we'll see if it will go. This is not good. Okay, maybe it will be all right. We've had technical difficulties. I'm here to talk about this new book, Rebooting AI, and some of you may have seen a piece I had this weekend in the New York Times called "How to Build Artificial Intelligence We Can Trust." We should all be worried about that question, because people are building artificial intelligence that they themselves don't think we can trust. Artificial intelligence has a trust problem. We are relying on AI more and more, but it hasn't yet earned our confidence. We also suggest there is a hype problem.
A lot of AI is overhyped these days, often by people who are prominent in the field. Andrew Ng, one of the pioneers of deep learning, said that if a typical person can do a mental task with less than one second of thought, we can probably automate it using AI, either now or in the near future. That's a profound claim: anything you can do in a second, we can get AI to do. If it were true, AI would be on the verge of changing the world altogether. It may be true someday, but I'm going to persuade you it's not remotely true now. The trust problem is this. We have things like driverless cars that people think they can trust, yet they should not actually trust, and sometimes they die in the process. This is a picture from a few weeks ago in which a Tesla crashed into a stopped emergency vehicle. That has happened five times in the last five years: a Tesla on Autopilot has crashed into a vehicle stopped on the side of the road. Here's another problem. I'm working in the robot industry. This robot is a security robot, and it committed suicide by walking into a puddle. So you have people saying machines can do anything a person can do in a second, and yet a person can look at the puddle and say, maybe I should not go in there, and the robots can't. We have other kinds of problems, like bias. You can do a Google image search for the word professor, and you get back something like this, where almost all of the professors are white males, even though the statistics in the United States are that only 40 percent of professors are white males. Around the world the number is much lower than that. You have systems that take in data, but they don't know whether the data is good, and they're just reflecting it back out, and that is perpetuating cultural stereotypes. The underlying problem with artificial intelligence is that the techniques people are using are too brittle. Everybody is excited about deep learning. It's good for a few things.
Object recognition: you can get deep learning to recognize that this is a bottle and a microphone, and you can get it to recognize my face and distinguish it from Uncle Ted's face. Deep learning can help some with radiology. But it turns out that all of the things it is good at fall into one category of human thought or human intelligence: things where you have to look at something and identify things that look the same or sound the same. That does not mean that one technique is useful for everything. I wrote a critique of deep learning a year and a half ago that you can find online, called "Deep Learning: A Critical Appraisal," and Wired wrote a summary of it. It says there are downsides to deep learning, so even though everybody is excited, it's not perfect. First, a real counterpoint to Andrew Ng's claim. If you are running a business and wanted to use AI, you would need to know what AI can do for you. Or if you're thinking about AI ethics and wondering what machines can do soon, I think it's important to realize there are limits on the current systems. Here is the reality: if the typical person can do a mental task with less than one second of thought, and we can gather an enormous amount of directly relevant data, then we have a fighting chance to get AI to work, so long as the test data, the things we ask the system about, are not too different from the things we trained the system on, the system doesn't have to change much over time, and the problem you're trying to solve doesn't change much over time. This is a recipe for games. What AI is good at is fundamentally things like games. AlphaGo is the best Go player in the world. The domain hasn't changed; the game hasn't changed in 2,500 years. We have a perfectly fixed set of rules, and you can gather the data for free. You can have the computer play against different versions of itself, which is what DeepMind did. Or you can keep playing and keep gathering more data. Compare that to a robot that does eldercare.
You don't want a robot that does eldercare to collect data through trial and error, working some of the time and failing other times. If your eldercare robot works 95 percent of the time putting grandpa into bed and drops him 5 percent of the time, you're looking at lawsuits and bankruptcy. That's not going to fly for the AI that would drive an eldercare robot. When deep learning works, there's something called a neural network, and it is taking big data and making statistical approximations. So you label a bunch of pictures of Tiger Woods, a bunch of pictures of golf balls, and a bunch of pictures of Angelina Jolie, and then you show it a new picture that is a bit different, and it correctly identifies it as Tiger Woods and not Angelina Jolie. This is the sweet spot of deep learning. People got excited when it started getting popular; Wired magazine had an article about it. But we've already seen an example of a robot that's not smart. The technique has been around for several years but has not delivered on the promises. There are things it gets wrong even in perception. On the right are some training examples: you teach the system that these things are elephants. If you show it something like the elephants on the right, you'd say, wow, it knows what an elephant is. But if you show it the picture on the left, the system says "person." It mistakes the silhouette of the elephant for a person, and it is not able to work out from the trunk that this is an elephant. This is a problem of extrapolation, and deep learning can't do it. Yet we are trusting deep learning every day. It's getting used in systems that make judgments about whether people should stay in jail or whether they should get particular jobs and so forth. It's quite limited. Here's another example making the same point, from actual cases. The system says, with great confidence, that this is a snowplow. The system cares about things like the texture of the road; it has no idea what a school bus is or what they're for. It is fundamentally mindless. This thing on the right was made by some people at MIT.
If you are a deep learning system, you say it's an espresso, because there's foam there. It picks up on the texture of the foam and says it's espresso. It doesn't understand that it's a baseball. Another example: you show it a banana, and you put a sticker in front of the banana, a psychedelic toaster, and because there's more color variation in the sticker, the deep learning system calls it a toaster. "It's a banana with a sticker in front of it" is too complicated; all it can do is say which category something belongs to. That's all deep learning does. If you're not worried that this is starting to control society, you're not paying attention. One second here to look at my notes. So, I was next going to show you a picture of a parking sign with stickers on it. It would be better if I could show you the actual picture, but presenting slides over the web is not going to work. A parking sign with stickers on it, and the deep learning system calls it a refrigerator filled with food and drinks. It notices colors and textures but does not understand what's going on. Then a picture of a dog that's doing a bench press. Something has gone wrong. Thank you for that. I would need a Mac laptop, and I think I just cannot do it fast. I don't think they're going to be willing to edit it. Just go on. So, a picture of a dog with a barbell, and it's lifting the barbell. The deep learning system can tell you that there is a barbell there and a dog, but it can't tell you, that's weird, how did the dog get so ripped that it could lift the barbell? Current AI is even more out of its depth when it comes to reading. I will read you a short story that Laura Ingalls Wilder wrote. It's about a nine-year-old boy who finds a wallet full of money dropped on the street, and his father guesses the wallet might belong to someone named Mr. Thompson. And he finds Mr. Thompson. Here is what Wilder wrote. Almanzo turned to Mr. Thompson and asked, did you lose a pocketbook? Mr. Thompson jumped and slapped his hand to his pocket and said, yes, I have.
Fifteen hundred dollars in it, too. What do you know about it? Almanzo asks, is this it? And he says yes. He opens it and counts the money, all the bills, twice, and then breathes a sigh of relief and says, that darn boy didn't steal any of it. When you listen to the story, you form a mental image of it. It might be vivid or not, but you know a lot of things, like whether the boy had stolen the money or where the money might be, and you understand why Mr. Thompson reaches into his pocket looking for the wallet. You know wallets occupy physical space, and that if your wallet is in your pocket, you will recognize that it's there. You know these things and can make inferences about how everyday objects work and how people work, and so you can answer questions about what's going on. There is no AI system yet that can do that. The closest thing we have is a system called GPT-2, released by OpenAI. Some of you may have heard about it. OpenAI is famous because Elon Musk co-founded it, and the premise was that they would give their AI away for free. Until they made this thing called GPT-2. They said GPT-2 was so dangerous they couldn't give it away: this thing is so dangerous we did not want the world to have it. But people figured out how it worked, made copies of it, and you can use it on the internet. So my collaborator Ernie Davis and I fed the Almanzo story into it. Remember, Almanzo has found the wallet, the guy has counted the money and is now super happy. We fed in the story and it continued it: "It took a lot of time, maybe an hour, for him to get the money from the safe place where he hid it." It makes no sense. It is perfectly grammatical, but if he just got his wallet back, what is it doing in a safe place? The words "safe place" and "wallet" are correlated in a vast database, but that is different from the understanding that little children have. The second half of the talk, which I will do without visuals, is called looking for clues.
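The failure mode just described can be sketched in a few lines. This is not GPT-2, which is vastly larger; it is a deliberately tiny bigram model over a made-up corpus (the corpus text and function names here are my own illustrative assumptions). The point it shares with the talk is that the continuation is driven purely by which words have followed which other words, with no model of wallets, money, or people.

```python
import random
from collections import defaultdict

# A hypothetical miniature corpus; in a real system this would be
# billions of words scraped from the web.
corpus = ("he found the wallet . he kept the money in a safe place . "
          "the money was in the wallet . he hid the wallet in a safe place .").split()

# Record, for each word, every word observed to follow it.
follows = defaultdict(list)
for w, nxt in zip(corpus, corpus[1:]):
    follows[w].append(nxt)

def continue_text(word, length, seed=0):
    """Continue text by randomly walking over observed word pairs.

    Every step is locally plausible (the pair was seen in the corpus),
    but nothing constrains the output to make global sense.
    """
    rng = random.Random(seed)
    out = [word]
    for _ in range(length):
        choices = follows.get(out[-1])
        if not choices:
            break
        out.append(rng.choice(choices))
    return " ".join(out)

print(continue_text("the", 8))
```

Run it a few times with different seeds: the output is grammatical-looking locally, because each adjacent pair really occurred, yet "safe place" can follow "wallet" regardless of whether anyone hid anything, which is the same correlation-without-understanding error the talk describes at much larger scale.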
The first clue, as we develop AI further, is to realize that perception, which is what deep learning does well, is only part of what intelligence is. You may know Howard Gardner's theory of multiple intelligences: there's verbal intelligence, musical intelligence, and so forth. As a cognitive psychologist, I would say there are things like common sense and planning, many different components. What we have is one form of intelligence, just one of those. It's good at doing things that fit with that, good at certain kinds of game playing. It doesn't mean it can do everything else. The way I think about this is that deep learning is a great hammer, and we have a lot of people looking around saying, because I have a hammer, everything must be a nail. Some things work with that, like Go and so forth, but there has been much less progress on language: there has been exponential progress in how well computers play games, but essentially zero progress in getting them to understand conversations. That's because intelligence itself has many different components; no silver bullet will solve it. The second thing I wanted to say is that there's no substitute for common sense. We need to build common sense into our machines. The picture I wanted to show you is of a robot in a tree with a chainsaw, and it's cutting on the wrong side, if you can picture that, so it's about to fall. Now, this would be very bad, and we would not want to solve it with reinforcement learning; you would not like a fleet of a hundred thousand robots falling out of trees. That would be bad, as they said in Ghostbusters. Then I was going to show you this really cool picture of a yarn feeder, which is like a little bowl for yarn with string that comes out of a hole, and as soon as I describe it to you, you have enough common sense about how physics works to understand it. And then I was going to show you a picture of an ugly one and say, you can recognize this even though it looks totally different, because you get the basic concept.
Then I was going to show you a picture of a room with a vacuum cleaner robot, and next to it a picture of dog waste, a dog doing its business, you might say. The Roomba doesn't know the difference between the two. This has happened not once but many times: Roombas do not know the difference between dirt that they should clean up and dog waste, and they spread the waste all the way through people's houses, the Jackson Pollock of artificial intelligence, a common-sense disaster. Then, what I really wish I could show you most is my daughter climbing through chairs, sort of like the ones you have now. My daughter was four years old. You're sitting in chairs where there is a space between the bottom of the chair and the back of the chair. Now, she did not do this with what we call reinforcement learning, trial after trial. I was never able to climb through a chair like that myself; I'm a little too big, even if I'm in good shape and exercising a lot. And it's not as if she had watched the television show The Dukes of Hazzard and learned to climb through windows; she'd never seen that. She just invented a goal for herself, and this is the essence of how human children learn things: can I do this? Can I walk on the small ridge on the side of the road? I have two children, five and six and a half, and all day long they make up games: what if it was like this, or can I do that? So she tried it and learned it, essentially in one minute. She squeezed through the chair, got a little stuck, did a little problem solving. This is different from collecting data with labels. I would suggest that if AI wants to move forward, we need to take clues from kids on how to do these things. The next thing I was going to do was quote Elizabeth Spelke, who teaches at Harvard down the street. She has made the argument that if you are born knowing that there are objects and places, then you can learn about particular objects, but if you just reason about pixels and videos, you can't do that. You need a starting point. This is what people call the nativist hypothesis.
There is a video of a baby ibex. Nobody ever wants to think humans have anything innate, even though humans are built with notions of space and causality, and the argument is that AI should have that too. But nobody has a problem thinking animals have it, so I show a baby ibex climbing down the side of a mountain a few hours after it's born. Anybody who sees this should see that there is something built into the brain of that baby. There has to be an understanding of three-dimensional geometry from the minute it comes out of the womb. Similarly, it must know something about physics and its own body. It does need to get calibrated and see how strong its legs are, but as soon as it is born, it knows a lot. Then there are the robots that fail, a bunch of robots failing at opening doors and falling over. It's sad that I cannot show you this right now, but you get the point. Current robots are really quite ineffective in the real world. The videos I was going to show were of things that had all been rehearsed in simulation. It was a competition that DARPA ran, and everybody knew what the events were going to be: the robots were just going to open doors and turn dials and things like that, and the teams had done them in computer simulation. When they got to the real world, the robots failed left and right. They could not deal with things like friction, wind and so forth. So, to sum up: I know a lot of people are worried about AI right now and read about robots taking over our jobs and killing us all. There's a line in the book that worrying about all that stuff now would be like someone in the 14th century worrying about highway fatalities, when people would have been better off worrying about hygiene. What we should really be worried about is not some vast future scenario in which AI is much smarter than people and can do whatever it wants, and I can talk about that, but rather the limits of current AI and how we are already using it in things like jobs and jail sentences and so forth. So, on the topic of robot attack, I suggest a few things.
The first is to close the door. Robots right now cannot actually open doors; there is a competition right now to teach them how to do that. If that doesn't work, lock the door. There's not a competition yet to have them deal with locks, so it will be seven or ten years before people work on that. So just lock the door, or put up a sticker like the ones I showed you; you will completely confuse the robot. Or talk in a foreign accent in a noisy room; the robots don't get any of this. The second thing I wanted to say is that deep learning is a better ladder. It lets us climb to certain heights. But just because something is a better ladder doesn't mean it will get you to the moon. We have a helpful tool here, but we have to discern, as listeners, readers and so forth, the difference between a little bit of AI and some magical form of AI that has not been invented yet. To close, and then I would love questions: if we want to build machines as smart as people, we need to start by studying small people, human children, and how they are flexible enough to understand the world in a way that AI is not yet able to do. Thank you very much. [applause]

Questions?

I am a retired orthopedic surgeon, and I got out just in time, because they're coming up now with robotic surgery, which is prominent in knee replacements. Do you have information about where that is headed and how good it is, et cetera?

Well, the dream is that the robot can completely do the surgery itself. Right now most of that stuff is an extension of the surgeon, like any other tool. In order to get robots to really be able to be full service, they need to understand the underlying biology of what they are working on.
They need to understand the relations between the different body parts they are working with, and our ability to do that is limited, for the reasons I'm talking about. There will be advances in that field, but I would not expect, when we send people to Mars, whenever that is, that we will have a robot surgeon like you have in science fiction. We are nowhere near that. It will happen someday; there is no principled reason why we can't build such things and have machines with better understanding, but we don't have the tools right now to have them absorb the medical training. It reminds me of a famous experiment in cognitive development where a chimpanzee was raised in a human environment, and the question was, would it learn language, and the answer was no. If you sent a current robot to medical school, it wouldn't learn diddly squat.

Other questions?

Do the current limitations of AI apply to self-driving cars?

Yes. Self-driving cars are a really interesting test case. It seems like it is logically possible to build them, but empirically the problem you run into is outlier cases, and it follows directly from what I was saying. If your training data, the things you teach the model or the system, are too different from what you see in the real world, these systems don't work well. The case of the tow truck and the fire truck that the Teslas keep running into is probably in part because they are mostly trained on ordinary data, where cars are moving fast on the highway, and when the system sees something it has not seen before, it doesn't understand how to respond. So I don't know whether driverless cars are ultimately going to prove to be closer to something like chess or Go, or whether they are going to be more like language, which seems completely outside the range. But people have been working on it for 30 or 40 years. There is progress, but it's relatively slow progress, and it looks a lot like whack-a-mole.
People solve one problem and it causes another. The first fatality from a driverless car was a Tesla that ran underneath a semi trailer that took a left turn. First of all, it was outside the training set, an unusual case. And second of all, I have been told, and I don't have proof of this, that what happened is the Tesla thought the tractor-trailer was a billboard, and the system had been programmed to ignore billboards, because if it didn't, it would brake for them so often that it would get rear-ended all the time. One problem was solved, and another problem popped up. The game of whack-a-mole. So what has happened so far is that driverless cars are like whack-a-mole. People make a little bit of progress, but they don't solve the general problem, and to my mind we don't have general techniques for solving these problems. People say, I will just use more data, and they get a little bit better. We need to get a lot better. Right now the Waymo cars need a human intervention about every 12,000 miles. That sounds impressive, but humans only have a fatality every 34 million miles or so on average. If you want to get to human level, you have a lot more work to do. It's just not clear that grinding out the same techniques is going to get us there. This is again the metaphor: having a better ladder will not get you to the moon.

My question is about machine learning and using it in science; I'm an astronomer, and we have started to use it. The concern is that if you're just doing pattern recognition, you don't learn anything. Do you think we're making progress on having machine learning programs be able to tell us how they're making decisions, in enough detail to be useful?

There is interest in that. Right now, and it may change, there's a tension between techniques that are relatively efficient and techniques that produce interpretable results.
Right now the best technique for a lot of perceptual problems, say if I want to identify whether this looks like another asteroid, is deep learning, and it is far from interpretable, as you can imagine. People are making incremental progress on that, but there's a trade-off: you get better results, and you give up interpretability. There are people worried about this problem; I have not seen a great solution to it. I don't think it's unsolvable in principle. But right now we are at a moment where the ratio between how well the systems work and how little we understand them is extreme. We will also have cases where somebody dies, and somebody will have to tell the parent of a child: the reason your child died is that parameter 317 was a negative number when it should have been positive. It will be completely meaningless. That is where we are right now.

Other questions?

Your thoughts on healthcare diagnostics, and also the fact that we cannot afford any misdiagnosis?

I guess that's three different questions, and I may forget one. The first is, can you use this for medical diagnosis, and the answer is yes. That relates to the last, which is how important a misdiagnosis is. The more important it is, the less we can rely on these techniques. It's also the case that human doctors are not completely reliable. The advantage machines have is in something like radiology in particular; they're good at pattern recognition, at least in careful laboratory conditions. Nobody really has, as far as I know, a working real-world system that does radiology in general. They're more like demonstrations that a system can recognize a particular pattern. In principle deep learning has an advantage over people there. But it has a disadvantage too: it can't read the medical charts. There's unstructured text, the doctors' notes and things like that, written in English rather than being a bitmapped picture of a chart.
Machines can't read that stuff at all, or they can maybe recognize keywords a little bit, whereas a really good radiologist is like a detective, sort of like Sherlock Holmes: well, I see this asymmetry here that shouldn't be there, and it relates to this accident the patient had 20 years ago, and he tries to put together the pieces in order to have an interpretation, a story about what's going on. Current techniques don't do that. Again, I'm not saying it's impossible, but it's not going to roll out next week. So the first cases where AI really has an impact on medicine are going to be things like radiology that you can do on a cell phone, where you don't have a radiologist available. In countries where there are not enough doctors, the systems may not be perfect, but you can try to reduce the false alarms to some degree and get decent results where you couldn't get results at all. We will start to see that. Pathology will take longer, because we don't have the data; radiology has been digital for a while. Then there are things like the television show House, where you're trying to put together some complex diagnosis of a rare disease, and systems are not going to be able to do that for a long time. IBM made an attempt at that with Watson, but it wasn't very good. It missed heart disease when it was obvious to a first-year medical student. And there it goes back to the difference between having a lot of data and having understanding. If you're just doing correlation and not understanding the underlying medicine, you can't really do that much. We just don't have the tools yet to do really high-quality medical diagnosis; that's ways off.
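The correlation-versus-understanding distinction in that answer can be made concrete with a deliberately crude sketch. This is not Watson or any real diagnostic system; the "training notes," labels, and symptom words below are entirely made up for illustration. The toy "diagnoser" scores a note purely by keyword overlap with past cases, so a presentation whose telltale words it never memorized gets a confident but irrelevant answer, which is the Watson-style failure described above in miniature.

```python
from collections import Counter

# Hypothetical toy data: past notes and their labels. No real medical
# knowledge is encoded; only word co-occurrence.
training_notes = {
    "flu": "fever cough aches fever chills",
    "migraine": "headache nausea light sensitivity headache",
}

def diagnose(note):
    """Score each label by how many of its memorized keywords appear.

    This is pure correlation: no anatomy, no causation, no sense of
    which symptoms are dangerous.
    """
    words = Counter(note.split())
    scores = {label: sum(words[w] for w in set(seen.split()))
              for label, seen in training_notes.items()}
    return max(scores, key=scores.get)

# Classic presentation of a heart attack, but none of those words were in
# the training notes, so the one overlapping word ("nausea") wins and the
# system confidently answers "migraine".
print(diagnose("crushing chest pressure radiating to left arm and nausea"))
```

A system with understanding would know that chest pressure radiating to the arm matters far more than a single shared word; a keyword correlator cannot know that, no matter how much data it sees of the same kind.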
Thank you for coming. I'm working now as a data analyst, a data scientist, and part of what I'm doing for my organization is scoping which of our tasks automation using machine learning would be helpful for, versus harder tasks like forecasting or solving wider problems that are, as you were saying, not solvable right now with current methods. How would you summarize or explain the idea of bounded versus unbounded problems?

I think the fundamental difference is that some problems are a closed world. They are limited; the possibilities are limited. The more limited the world is, the better current techniques can handle it. Other problems are open-ended, where they can involve arbitrary knowledge or unusual cases. Driving is interesting because in some ways it's closed: you only drive on the roads, if we're talking about ordinary driving circumstances. But it's open-ended because there could be a police officer with a hand-lettered sign saying this bridge is out. There are so many possibilities; in that way it's open-ended. What you end up finding is that the driverless cars work well on the stuff that is closed, where there's a lot of conventional data, and they work very poorly when they're forced to go outside their comfort zone. These systems have a comfort zone, and when they go outside it, they don't work that well.

You made a point about how humans learn without much data. Do you think humans have an advantage because of evolution, a billion years of evolution? Or is the problem that maybe we are using data the wrong way, or not using enough data?

I don't see it that way. I see it that what a billion years of evolution did was build the genome, and with it a rough draft of the brain, and if you look at the developmental biology, it's clear that the brain is not a blank slate. It's carefully structured.
We don't understand all of it, but there are a number of experiments that illustrate it; you can do deprivation experiments where animals have no exposure to the environment. What evolution has done is shape a rough draft of the brain, not a final brain, one that is built to learn specific things about the world. You can think about ducklings looking for something to imprint on the moment they are born. Our brains are built to learn about people and objects and so forth. What evolution has given us is a good toolkit for assimilating the data that we get. You could ask, with more data and time could I get the same thing, and maybe, but we are not very good at replicating a billion years of evolution. That is a lot of trial and error that evolution did. We could try to replicate it with enough CPU or GPU time, enough graduate students and so forth, but there is another approach to engineering, in which you look to nature, to how it solves problems, and try to take clues from the way nature solves the problem. That's fundamentally what I'm suggesting: we should look at how biology, in the form of human brains and other animals' brains, manages to solve problems. Not because we want to build literal replicas of people; we don't need to build more people. I have two small people and they are great. We want to build AI systems that take the best of what machines do, which is to compute fast, and the best of what people do, which is to be flexible. Then we could do things like solve problems that no human being can solve. There are 7,000 papers published every day; no doctor can read them all. It's impossible for humans, and right now machines can't read at all. But if we built machines that could read, and we could scale them the way we can scale computers, then we could revolutionize medicine.
But to do that, we need to build in basic things like time and space and so forth, so that the machines can then make sense of what they read.

Other questions? How are you thinking about fixing the problem? Building these new modules, what form will they take? Are they going to use the same structures that deep learning currently uses, or something completely different?

The first thing I would say is that we don't have the answers, but we try to pinpoint the problems. We try to identify, in different domains like space, time and causality, where the current systems work and where they don't. The second thing I will say is that the most fundamental need is for ways of representing knowledge in our learning systems. There is a history of things called expert systems that are good at representing knowledge: if this is true, then do this other thing; it's likely that such and such is happening. The knowledge looks a little bit like sentences in the English language. Then we have deep learning, which is good at representing correlations between pixels and labels but very poor at representing that kind of knowledge. What we argue is that we need a synthesis: learning techniques that are responsive to data in ways that the traditional techniques were not, and learning that represents and works with abstract knowledge, so that you can, for example, teach a system something by saying "an apple is a kind of fruit" and have it understand that. We have systems that can do a tiny bit of this, but we don't really have systems where we have any way of teaching something explicitly, like "wallets occupy physical space," or that a wallet inside a pocket is going to feel different from a wallet not inside a pocket. We don't have a way of even telling a machine that right now.
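To make the expert-systems side of that synthesis concrete, here is a minimal sketch of explicitly stated, symbolic knowledge of the kind just described. Everything in it is a toy assumption of mine (the particular facts, the dictionary-based representation, the function name); it is nowhere near a real knowledge-representation system. But it shows the property the talk is asking for: you can teach it something in one statement, like "an apple is a kind of fruit," and inferences follow immediately, with no retraining on thousands of examples.

```python
# Explicitly taught knowledge: "is-a" links and properties attached to
# categories. One new statement is one new dictionary entry.
is_a = {"apple": "fruit", "fruit": "food", "wallet": "physical_object"}
properties = {"food": {"edible"}, "physical_object": {"occupies_space"}}

def has_property(thing, prop):
    """Walk up the is-a chain, inheriting properties from ancestors."""
    while thing is not None:
        if prop in properties.get(thing, set()):
            return True
        thing = is_a.get(thing)      # step to the parent category, if any
    return False

print(has_property("apple", "edible"))           # apple -> fruit -> food
print(has_property("wallet", "occupies_space"))
print(has_property("wallet", "edible"))
```

Telling this system that wallets occupy physical space took one line; the open research problem the talk points at is combining this kind of statable, inspectable knowledge with learning systems that are responsive to raw data.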
[Audience member:] I'm thinking, if we don't take the deep learning approach and don't learn all of this from data, and we try to incorporate some of this knowledge by hand, don't we end up playing a different game of whack-a-mole, where today we're trying to impose some knowledge about time and tomorrow we realize we need knowledge about space?

I think we need to do all of it. What I would say is that there is a lot of knowledge that needs to be encoded, but it doesn't all have to be hand-encoded. We can build learning systems. But there are some core domains, and I borrowed this term, some core domains that enable other things. If you have a framework for representing time, you can represent things that happen in time. If you don't know that time exists and just see correlations between pixels, that's not going to get you there. One way to estimate the scale is to think about the number of words in the English language that a typical native speaker knows. It's something like 50,000 if they have a big vocabulary. If there are maybe ten or 100 pieces of common sense that go with each of those words, then you're talking about millions of pieces of knowledge, but not trillions. It would be a lot of work to encode them all. Maybe we can try to do it in a different way, but it's not an unbounded task. It's one people don't want to do, because it's so much fun to play around with deep learning and get good approximations, not perfect answers, so nobody has the appetite for it right now. But it could be that there is no way to get there otherwise. That's my view, and it sits in a long tradition of nativism: the idea that the way you get into the game is that you have something that allows you to bootstrap the rest. I don't think it's a whack-a-mole problem; I think it's a scoping problem. We need to pick the right core domains, and if you look at the cognitive development literature, that's true for babies: they start with core knowledge and then develop more from there.
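The back-of-envelope estimate above can be checked directly. The numbers are the speaker's rough figures, not measurements:

```python
# Rough estimate of how much common-sense knowledge would need encoding:
# vocabulary size times common-sense facts per word.
vocabulary = 50_000            # words a native speaker with a big vocabulary knows
facts_per_word = (10, 100)     # speaker's rough low/high range per word

low, high = (vocabulary * f for f in facts_per_word)
print(f"{low:,} to {high:,} pieces of knowledge")
# 500,000 to 5,000,000 -- on the order of millions, not trillions
```

That three-to-four orders of magnitude below "trillions" is what makes the encoding job laborious but, as the speaker argues, not unbounded.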
I think we could at least start with the things that they have identified, work on those, and see where we are. Other questions? Thank you all very much. [applause] I think we'll have a book signing if anyone is interested. Thank you for coming out. We have copies of the book at the front counter.

Book TV recently spoke to Republican congressman Steve Scalise of Louisiana about his recovery from gunshot wounds he suffered during a congressional baseball game practice in 2017. It was a pretty powerful weapon, obviously. The 7.62 caliber bullet that hit me, you could take a bear down with that, and when I saw the size of the bullet later, when they showed me, I said, how am I alive? It really does make you wonder. But again, there were a lot of miracles that day, and that is one I detail in the book. Whatever your faith is, I have a strong faith and it helped get me through it. I chronicle some very specific things that happened on the ball field, and even if you don't have the same faith, you might say the first one or two were a coincidence, but by the time you get to the fifth and sixth ones, clearly there was a larger presence on that ballfield. One of those is that Brad Wenstrup usually did not stay till the end of practice. He normally has a meeting around 8:00, so he leaves closer to seven to go shower, get ready, and go to the office. That morning his meeting was canceled, and he decided to stay for extra batting practice. He was in the batting cage on the first-base line. The shooter was hiding behind the third-base dugout. Brad was out of the line of fire, but he could see everything happening. His instincts and skills took over, and he knew as soon as the shooter went down that he had to come check on me, to see what had happened and whether he could do something to help me. I would not be here if he was not there that day, but most days he would not have been there, because his schedule would have brought him somewhere else. That is one of the many miracles.
Steve Scalise's book is called Back in the Game. You can watch the rest of his interview by visiting our website at booktv.org; type his name or the title of the book in the search box at the top of the page. Prime time starts now here on Book TV. First, you will hear from Mary Gray on workforce issues faced by tech companies like Amazon, Google, and Uber. Then, Andrew Pollack, the father of a student killed in the school shooting in Parkland, Florida, offers his thoughts on school safety and guns. That's followed by our author interview program, After Words, with Paul Tough, who reports on the cost of a college education, at 10:00 p.m. Gordon Chang weighs in soon on the dangers that South Korea may face from North Korea.