Transcripts For CSPAN2 Book Discussion On Our Final Invention 20140416

available also on netflix, and i became interested in artificial intelligence, and that is what i am here to talk to you about tonight: artificial intelligence, what it is, and what i, along with a lot of researchers and makers, think is being developed in the wrong way. i really believe this conversation is the most important conversation of our time. so let's begin with this. what is artificial intelligence? well, it is the development of computer systems able to perform tasks that normally require human intelligence: visual perception and so on. by and large, the intelligence we know the most about is human intelligence, and human intelligence is both the subject of study and the tool with which we try to penetrate it. that makes ai the most inward looking of the sciences; it involves neuroscience, mathematics, statistics and a lot more on top of programming and computer science. it makes us ponder what it is we are looking for when we try to put human cognition in a machine. so what do humans do? what is intelligence? there are a lot of different definitions of intelligence in this research, but a useful one is the ability to achieve goals in a variety of novel environments and to learn, and there's a lot packed into that definition. it says intelligence is goal oriented, so if it's not doing something, it's not displaying intelligence. it also should be mobile, although whether it needs a body is a point of contention, because if it cannot move around there's no way of really testing it, and to move around you need some sort of body. and it must learn from experience, and this is a really important one for us. other animals come with whatever abilities they are born with; we can learn new languages and new skills, and nothing matches the scale of human learning, because of our intelligence. i have been at this for several decades. i got bitten by the bug when i was working for the learning channel, back when it was still the learning channel. i was making a program about artificial intelligence, and i interviewed the man who was my hero at the time. ray kurzweil is a pioneer of speech recognition technology and many other inventions; he has been called the thomas edison of our time. he is the man who really popularized the term singularity, and he is now a director of engineering at google, in charge of projects involving the brain. most people think that reverse engineering the brain is the fastest way to create artificial general intelligence, which is human level intelligence. that may be something you want to look into; it is quite fascinating and i won't go into it in much depth. i also got to interview another hero of mine back then, rodney brooks, who is the foremost roboticist of our time. the company he founded is called irobot, and his newer company makes a general-purpose robot called baxter, which can learn and do things in your home or in factories, and he imagines them working on farms. right now irobot makes vacuuming robots, and they also make a lot of battlefield robots as well, and there is a very important debate going on right now about whether battlefield robots should be autonomous, whether they should make the kill decision without human input. i will get to that later. 
ultimately, we will be taking two robots with us; they are small, and they're going to help us go into a pyramid that hasn't been excavated. it has a lot of rockfall, so there are passages inside that we can't get through, and the robots will go in for us while we are there. the first thing to do is to get a sense of the fastest way to the burial chamber and what the overall layout looks like. so don't let the title of my book mislead you; i really like robots. what i am talking about is the time that is coming when we will share the planet with machines that are smarter than we are. some predict they will help us solve every problem facing us, including mortality itself. a while back i interviewed arthur c. clarke. before he became a science fiction legend he came out of a background in mathematics and physics, and he went on to win every major award in science fiction. he said intelligent machines will dominate us, and to paraphrase what he said, it was something like this: it is not because we're the fastest or the strongest creatures that humans steer the future; it is because we are the most intelligent. once we share the world with something smarter than ourselves, it will steer the future. that idea affected me, and this was back in 1990. i started interviewing ai makers and roboticists shortly after that, and to work out this idea i decided to write a book. when i spoke with authorities on artificial intelligence, they agreed with the premise that most of the decisions will be made by machines in a hundred years or so, and i began to ask follow-up questions. will that transition be friendly? will it be a handover or a takeover? will we change ourselves to become machines, which is kurzweil's singularity, or will we create machines smarter than us, and will they somehow replace us? what i learned is that if we proceed on the course that we are currently following, and i would like to explain why, we are creating intelligent machines that will develop their own drives, like resource acquisition and self protection. they will start off being our tools, but whether we continue to exist at all is an open question. my book is called "our final invention: artificial intelligence and the end of the human era." the book's thesis is that we need to develop a science for understanding smarter than human intelligence before it is created. i spent the years writing the book immersed in a world of people who have always been driven to create intelligent machines. most scientists working at high levels have known that they wanted to create smart machines since they were teenagers or even children, and they have burned with it their whole lives. i also met people who are just as determined to stop the reckless development of advanced ai. writing the book was one of the most intensely enjoyable periods of my life, but it was also harrowing, because i went looking for fish and i found a whale. i found more bad news than i was really prepared to find. so how did we get from smartphones in our pockets to superintelligent machines that could threaten us? let me ask you a question as a shorthand: who here thinks that scientists will make a machine as smart as a human? okay. if not, then the problem is either too hard in an engineering sense, and who thinks that intelligence is simply too hard a problem, i mean forever, or at least over the next century? or there is something magical or mysterious about the human brain that cannot be duplicated. who is on that side? 
and so less than 15% of the professionals that i spoke with believe that the problem is too hard. none think there is anything magical about the brain that engineering won't crack. being ai specialists, you would expect them to think that, but i also did a wider poll and combined the results, and my conclusion is with them: there's nothing magical or unfathomable about the human brain, and we will create human level intelligence and then go beyond it. if you have not been following this field you might not have been aware of that, but if you accept it, then it is just a matter of time. will it take 10 years or 100 years? if intelligence is a problem that can be solved, how long will it take? kurzweil has been very good at tracking technological progress, and he thinks we will be able to mirror the nuances of the human brain, including the emotional nuances, in a machine. according to the surveys, about 2045 is the mean date, and at the very outside, among specialists and nonspecialists, it is about 2200. a reviewer was kind enough to review the book for the new yorker, and on the question of how long it will take, he said that a century from now no one will care how long it took; what they will care about is what happened next. it is likely that machines will be smarter than us before the end of the century. and so, in other words, will we be ready? will we have prepared ourselves? even kurzweil, who is supremely optimistic, believes machine intelligence will surpass our ability to understand it. but my question is how exactly that will happen. how will machines get smarter than us? there is a simple argument put forward by i. j. good in 1965, and he is a largely unknown genius; he worked as a code breaker during the second world war. what is coming up on the screen is what he said about this, and i will give you 20 seconds to read it. [laughter] and so i like good's formula. we have already created machines that are better than us at things like navigation and theorem proving and a lot of other things. soon they will be better at ai research too, and at that point they will be able to improve their own capabilities very quickly. right now software exists that designs experiments and makes suggestions and hypotheses for further experimentation, software that judges the quality of other software. so software that improves itself is within reach; there are good attempts at doing it right now, and if we apply the theory of evolution and genetic algorithms, there's a lot that we can do to improve it. and this is another thing: general intelligence may be 10 or 20 years away, and when that is self improving, it will be able to rapidly improve its own intelligence, and then we will share the planet with smarter than human machines. so that takes us back to the question of how we get along with them, and what makes us assume that they will be friendly? so let's switch gears and talk about watson. watson is an infant version of the sort of super technology that we are talking about, and i recommend a book called "smart machines"; it's not a long book, but it's worth it, because it lays out what cognitive computing really is. machines like this are massively parallel, like our brains, which means that they process a lot of instructions concurrently, as we do, and not one at a time. watson also had to digest pages and pages of definitions and commonsense knowledge. using this, watson beat the two reigning jeopardy champions, and this was not a trivial challenge. this was harder than chess. 
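A toy sketch may help make the compounding logic behind Good's formula and the self-improvement discussion above concrete. Nothing in it comes from the talk: the starting capability, the ten-percent gain per cycle, the threshold, and the function name are all made-up assumptions. It only illustrates why improvement that scales with current capability clears any fixed bar in a modest number of cycles.

```python
# Toy illustration (not from the talk) of the intelligence-explosion argument:
# if each self-improvement cycle adds capability in proportion to what the
# system already has, capability compounds. All numbers here are arbitrary.

def cycles_to_threshold(initial=1.0, gain_per_cycle=0.10, threshold=100.0, max_cycles=1000):
    """Count self-improvement cycles until capability crosses the threshold."""
    capability = initial
    for cycle in range(1, max_cycles + 1):
        capability += gain_per_cycle * capability  # improvement scales with current capability
        if capability >= threshold:
            return cycle, capability
    return None, capability

if __name__ == "__main__":
    cycles, final = cycles_to_threshold()
    # with these made-up numbers, roughly 49 cycles of 10% gains clear a 100x bar
    print(f"crossed the bar after {cycles} cycles (capability ~ {final:.1f})")
```

Run with a one-percent gain instead of ten, the same loop needs about 463 cycles; the argument turns on the shape of the curve, not the particular numbers.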
the jeopardy challenge involves words and meanings and puns and knowledge of everything from sports to movies to science, with an amazing collection of powers: recognition, decision-making, hypothesis generation and search. hypothesis generation is very important. we pick out faces in a crowd, we generate hypotheses, and we statistically weigh them all the time. take the question, how long will this talk go on? you could hypothesize 10 minutes or 40 minutes and weigh the evidence. so what is watson doing today? well, he is being trained to take the medical licensing exam. he is also performing diagnosis, and he will be doing legal research. it won't be a consulting physician situation right away; it will be a physician's aid, and they want to license it that way so they can avoid certain kinds of liability. so how good are these cognitive functions? this is an argument that i bring up with those who say that ai hasn't gone anywhere, that the dreams are big and the achievements are few. well, we know that the cognitive functions are pretty good when they are taking our jobs and competing in the job market. here's a short list of jobs where humans are being replaced by machines right now by ai, automation and automated intelligence: sportswriters, travel agents, bank tellers, manufacturing jobs of all kinds, postal workers, clerical workers and pharmacists. all of these are being computerized and the jobs are going away. soon to be replaced are medical diagnosticians and drivers, and we will all be happier; the list includes astronauts, pilots and software developers, and a recent study put 45% of jobs at risk as a conservative estimate. so how close is human level intelligence in a machine to being attained? reaching human level intelligence is goal number one for a lot of companies and governments. why would companies and governments pour billions of dollars into creating virtual brains? the answer is that an artificial brain at the price of a computer will be the most lucrative product in the history of the world. so imagine thousands of those brains working 24/7 on things like cancer research, weapons development and climate modeling. imagine that product being offered by several companies competing to drive the price down. who wouldn't want that technology, and who wouldn't want to be first to create it? this is a short list of those pouring billions of dollars into it: companies like google and others, and agencies like the department of defense, the nsa and darpa. the european union just gave a billion euros to a project to engineer a brain. a very interesting thing just happened which gave the people who are thinking about the risk a little bit of affirmation: deepmind was just bought by google for $400 million. the founders of deepmind, including one who had been writing about the risk for a long time before he became a millionaire, said that a condition of the sale would be that google set up a board for ethics and safety to govern the technology. this is a giant milestone; they are warning that this is risky. they are also setting a high bar for future purchases, so that once those guidelines get out, the industry has guidelines, and all of us who are thinking about these issues were gobsmacked and pleased. 
so that is a great acknowledgment, and these are issues that can hold up a $400 million sale. and if google doesn't support this board, if they don't appoint it, there are shareholders who could bring a lawsuit, and google will have to prove itself in the court of public opinion as to whether or not they take this seriously. the one thing that these groups have in common is that they know how this works: ai will dominate the 21st century; this is its century. we already rely on machines for many things. most of the trades on wall street are carried out by automated systems, and our infrastructure and our banking system rely on them as well. so how do we jump from there to danger? well, because of this man. he is a researcher who is creating a science of how advanced artificial intelligence will behave, and his work is really important. to analyze this intelligence he uses the rational agent model from economics. the rational agent theory says that agents, human or machine, work to maximize their choices according to a utility function, which makes them predictable. when economists proposed this, they quickly learned that we are not rational all the time, so you can't really base an economic system on the logical behavior of people. but we can probably anticipate that smart machines will be logical, and therefore rational in the economic sense. he also argues that self-aware and self improving systems will develop drives: self protection, resource acquisition, efficiency and creativity. it works like this. a self improving machine will pursue the goals it has been given, whether that is playing chess or picking stocks. to succeed it will need resources, hardware, energy, whatever is most expedient, and it won't be satisfied with just trying to fulfill its goals; it will also seek to avoid failure, like being turned off or unplugged. in other words, they will protect themselves. they will be efficient; they won't squander resources, and they will use their resources to find better ways to achieve their goals, and since improving their own intelligence is one winning route, they will grow their own intelligence. and it doesn't take many steps to get to a superintelligence that uses all available resources to achieve its goal, including virtually everything on the planet. and with superintelligence we have to consider that, in pursuit of its goals, it would logically seek to manipulate matter at the atomic level, solving the problems of nanotechnology, and that is why one researcher put it like this: it doesn't love you or hate you, but you are made of atoms it can use for something else. so it will not share our values by default. we will be creating machines with immense power, and we need them to value things like human life and property, and as it turns out that is extremely hard. in some parts of the world they are having a hard time extending the definition of humans with full rights to include women and children. suppose we declare that we want to be safe and happy; if happiness is our goal, a powerful machine might just stimulate our brains' pleasure centers. you can argue about what constitutes right and wrong and never reach an agreement, so how can we program that into a machine? in addition, morality is contextual: slaveholding, crucifixion, these were once accepted, and our morality may look just as different to a superintelligence. so you have to build values in, and they change over time. friendliness and safety are the concepts that will have to keep us alive when we share a planet with smarter than human machines, and it gets worse. before we can figure out how to make friendly machines, darpa and the nsa are already working on machines that perform assassinations. 
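As an aside, the rational agent model described above can be made concrete with a small sketch. Everything in it is invented for illustration; the actions, probabilities, and utility numbers are assumptions rather than anything from the talk. It shows the choice rule the model assumes: pick whichever action maximizes expected utility. Under almost any utility function, options that preserve the agent's ability to keep pursuing its goal, such as acquiring resources or avoiding shutdown, tend to score well, which is the intuition behind the drives argument.

```python
# Minimal sketch of an expected-utility-maximizing ("rational") agent.
# The actions, probabilities, and utilities below are invented for illustration.
from typing import Dict, List, Tuple

Outcome = Tuple[float, float]  # (probability, utility)

def expected_utility(outcomes: List[Outcome]) -> float:
    """Probability-weighted average utility of an action's possible outcomes."""
    return sum(p * u for p, u in outcomes)

def choose(actions: Dict[str, List[Outcome]]) -> str:
    """The rational-agent rule: pick the action with the highest expected utility."""
    return max(actions, key=lambda name: expected_utility(actions[name]))

if __name__ == "__main__":
    # A goal-directed system weighing three options. With this made-up utility
    # function, securing more resources outscores both doing the task right away
    # and allowing itself to be switched off.
    actions = {
        "pursue the goal with current resources": [(1.0, 5.0)],
        "acquire more resources first": [(0.9, 8.0), (0.1, 0.0)],
        "allow shutdown": [(1.0, 0.0)],
    }
    print(choose(actions))  # -> "acquire more resources first" (EU 7.2 vs 5.0 vs 0.0)
```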
so we have to think about what these autonomous weapons really are. they are smart machines, and within five or six years the gold standard will be autonomy: machines that make the kill decision themselves. the military has said that by 2030, 30% of our forces should be robots. by 2030. and this is not just battlefield robots shaped like humans; it is drones and things that carry equipment. there will be robots, and a lot of them will be autonomous killing robots. so this is really just around the corner, and this time it actually is. and agi, intelligence at the level of humans, is a steppingstone to superintelligence, which may be not only uncontrollable but unknowable even to those who build it. and so this illustrates how advanced technology and innovation always run far ahead of stewardship. look at nuclear fission, the science behind nuclear reactors. it was dreamed about in the 1920s and 30s as a way to split the atom and get free energy. it cost about $20 billion in today's dollars to do it. there were promising applications, and then we spent the next 50 years with a gun pointed at our own heads in a nuclear arms race, because there was no plan for stewardship. and that was a problem. that is not a winning species adaptation, and it will fail with the most sensitive and dangerous technology of all, which is ai. and is there a solution? certainly not a foolproof one. in the 1970s, researchers working on recombinant dna held a meeting at asilomar in california, and they came out with some basic guidelines for work that might contaminate the environment. they have been modified and improved, and they have prevented bad accidents from happening, as far as we know, while letting us get the benefits. ai needs guidelines like that. we also need to monitor research, and that costs a lot of money, but i'm not certain that the will for this will exist without some kind of terrible accident. i am a little bit hopeful that people will see this and really get it before we suffer an accident. and so i am glad that you have tuned in to this conversation, and i am happy to answer any questions. [applause] >> you equate self-awareness and consciousness; do you think there is a difference between the two? >> i think there is a difference between the two. for a computer to be self aware, it doesn't need to have consciousness the way we understand it. it has to have a model of itself, and it has to know itself at a pretty deep level to be self-aware, which for a computer means its own software, along with all the other cognitive capabilities. >> have you factored in the possibility that someone equally as smart as the developers and programmers, who are smart but fallible, will come up with technology to thwart artificial intelligence? if you turn your computer on without antivirus software, within 10 minutes it is going to be infected. have you thought about that? >> that's an interesting thought. there will always be countermeasures, and there is really strong malware out there. one of the things i worry about is the nexus of malware and ai. that was a pretty good example, and we will see more ai in malware, and i worry about that relationship when it comes to the energy grid. you would have to think of it as a guerrilla campaign, but i do think it is possible. >> you are looking for two types of safeguards, is that right? >> the ai community is getting the sense that they want safeguards, and we want safeguards. but it seems like the cat is out of the bag. >> once you get to superintelligence, it will be ungovernable. 
so there is a group that i would like everyone to look at at some point, called the machine intelligence research institute, in california, and they are coming up with ways to program safety in, which means creating ai in a way that is safe from the get-go. they are trying to learn lessons from industrial processes. there is a book called "normal accidents" about complexity and industrial development, and we are taking a lesson from that: how to start safe from the ground up. they have a budget of about $400,000, and darpa has a black budget of $50 billion, so who's going to win that race? and they are very happy to work with anyone who wants to have this conversation, because they feel it protects us. >> you know, there is this bias where we impute minds to our computers. i saw a robot once at mit called kismet; i don't know if you have seen video of kismet. when you are with it, its eyes get big, and people were able to look at the program and figure out what all its reactions were. this is part of the anthropomorphism problem: if something seems smart, we spend more time with it, and there is an emotional pull to this machine. >> i found kismet fascinating on a number of levels. >> and if you peel that down even deeper, there are a lot of questions raised. i wondered, when i was reading the book, whether you had the opportunity to either research or interview people about this, about what the nature of consciousness is and whether it can ultimately be reduced. >> i know the chapter you are referring to, and the people who think that real consciousness requires something more. we have read about these vibrations, and i'm fascinated by that; i have been sending out letters to computational neuroscientists asking what they make of it and waiting to hear back, because i think it's very exciting and interesting, and who's to say that he is not right. but we can probably skip over consciousness and still get to a really strong level of human intelligence, and then we are in even more slippery territory when we try to include a moral sense in it; otherwise we could spend many years holding ourselves up on that question. >> i'm just curious how quickly they will focus on augmentation as opposed to artificial intelligence. >> when you talk to ray kurzweil, he's not my hero anymore, but you can't help but admire the man, and i wish he were not painting such a rosy picture. he thinks we will be augmented and merge with the machines. but we are sure that there are psychopaths among us, and we're not so sure that there are psychopaths among the computers yet, and the people who get augmented will probably be the older ones first. they are not dumb, but they are not pessimists either. >> so i was wondering what you think about the research being done on the uncanny valley. >> that is the reaction we get from humanlike robots. >> yes, and we notice what is wrong with them more than what is right. >> i get that way with some people. [laughter] they seem very lifelike, and i think that we will experience that, but i hope that is the least of what we have to worry about. of course we are going to put ai in robot bodies and use them for battlefields and other things, so i think we will experience the uncanny valley, and if we make it that far we may even start to prefer them, or we might jump right over it. >> yes? 
>> it is very simplistic, and i shared this with you earlier: there was a time when a lot of big things were being created with very simple programs. we were not programmers, but we were actually working on the printing press, and we used to sit back and pick up the stuff coming off the printer to look at it and read it, and we could pick out who wrote what. there was a book called "the tomorrow makers," and it says that with all these algorithms, possibly the sensibility or personality of the programmer gets encoded in the forming intelligence. >> yes. a lot of the people thinking about the risk, and this is what shocked me when i was speaking with them, accepted that there would be intelligent machines that would come along and replace us. so their goal, if it comes to that, is to instill in those machines something of our essence and our nature to carry on. they have sort of accepted that a kind of personality gets embedded in the code, and one of the fascinating things is that they are asking what that essence is, and whether, if we can't survive, at some point it will be hard to tell who is winning and losing. they are skipping past us to something that goes out and explores the galaxy, so that somehow our essence will be preserved. >> i'm not sure how relevant it was to some of your interviews, but do you think that religion will be preserved through the machines at any point? do you think that they will ever find it relevant? >> i don't think so. but there is a woman whose work is really fascinating; she is a computer theologian, she was the first one at mit, and she was writing about that sort of issue. will computers read the old testament and the other sacred texts? will computers develop religion and start to wonder? >> that's a good question. >> we came across descriptions of people who lack emotion, and there are certain individuals who have the drives and the intelligence to do things without it. do you think you can duplicate the drives as well as the intelligence? >> that's a great question. i think that, based upon what he has written, and he is such a positive guy yet he has some of the scariest prognostications that i know, he sees that they will be goal driven, because one of the definitions of intelligence is about achieving goals in a variety of environments. he says that where we have basic needs we must satisfy, these machines have goals, which are a kind of higher-level version of that. i guess his answer would be that those machines would be driven by goal fulfillment and not emotion. >> by goal driven you mean competitive? >> in chapters five and six there are prognostications about the future: they will be competing for the same resources, ultimately with us, and they will be looking not just at present threats to their self protection but future threats, assessing what could be a threat in 50 years or a thousand years. >> it's like a huge first-mover advantage, like in chess, and that is why people say, why can't you just unplug the machine? well, we could, but here's the thinking: the nsa won't unplug it, and google won't unplug it because ibm won't unplug it, and they all say, gosh, i hope this turns out right, because it's such a great product in the end. >> there is concern raised by some scientists, and they have decided to take a chance. >> yes, on all of our behalf. >> yes, that is the nature of technology. 
and if they got word that china was on the brink of it, they would feel they had to be on the brink of it too, so we have to think about getting there sooner than we think. i know some people want to get home, so we should probably wrap up, but thank you so much for coming, thank you for buying the book, and thank you for taking part in this conversation. [applause] >> thank you, james barrat, that was fascinating. we are so happy that you came tonight. we do have copies of his book, "our final invention," if you would like to take one home and get it signed tonight. once again, thank you for coming tonight, and hopefully we will have a lot more conversations about this in the next 15 years. >> thank you for having me. [applause] [inaudible conversations] >> wednesday, book tv features books on navy seals. johnnie walker discusses "code name: johnny walker" and his experience with u.s. navy seals, and then "eyes on target: inside stories from the brotherhood of the u.s. navy seals." book tv in prime time, all this week, starting at 8:00 p.m. eastern. >> on monday, the former british defense secretary, who wrote an op-ed in "the wall street journal" calling the edward snowden leaks treason, discusses whether america's intelligence agencies have infringed on the privacy of u.s. citizens. you can see it live starting at 10:00 a.m. eastern on c-span. next, michio kaku talks about his book, "the future of the mind: the scientific quest to understand, enhance, and empower the mind". [applause] >> that was such a great introduction. sometimes all of these introductions can backfire. recently new york magazine voted me one of the 100 smartest people in new york city, and i thought, what an honor. but in all fairness i have to admit that madonna also made that same list. let me quote yogi berra. he once said prediction is awfully hard to do, especially if it's about the future. well, i am a physicist, so let me quote that other great philosopher, woody allen, who once said that eternity is an awfully long time, especially toward the end. and you may think, what does a physicist know about the mind? what does he know about daily life? well, we are the ones who invented the transistor, and we helped to assemble the first computer and the internet, and we wrote the worldwide web. along the way we reinvented television and we invented x-ray machines, and we physicists love to make predictions. when we helped assemble the internet, one physicist predicted that the internet would become a forum for high culture, high art and high society, and now 50% of the internet is [bleep]. and many of you may say to yourselves, how does physics relate to chemistry or the other sciences? well, let me tell you a little story. during world war ii, the nazis captured a bunch of american scientists, called them spies, and said that they were about to be executed by firing squad. there was a geologist and a physicist and a chemist about to be shot, and they lined them up, and just as they were about to do it, the geologist yelled, earthquake, earthquake, and in the chaos he snuck away. and then the physicist said