comparemela.com

T r e o. Im re right, just saying with our to international have next on going underground. The option returns again his guest to discuss the rise of Artificial Intelligence and what it could mean for human by the action to attempt to welcome back to going underground pool costs to go around the world. Really, you a, what do you think is most likely to destroy humanity . Elite power . Yes. But through nato expansion, as a nuclear war, pandemic Climate Change or something else. But he is futurist scientists will make as a novelist. Have warned how a super intelligent say i could become the biggest threat to he man, is it now just a narrow it leads, controls accelerating. I erase how long before the robots take over the youth, and does anyone even comprehend what theyll be capable of . Is it tell us why the advancement of a i is riskier than russian roulette . Is professor roman impulse keys associate professor of Computer Science and director of the Cyber Security laundry at the university of louisville, kentucky where he joins me from. And hes the author of the new book, hey, i, im explainable, unpredictable, uncontrollable. Thank you so much a professor rambles. Keep for coming on. Im going to say this, the intro. I just did way more dangerous than any i present or of going underground where in dangers program. But we compare this to hospital given the book clearly defined so many different concepts with the most basic definition i guess what, what is a i, whats a g . I o a, a is just the old name for our desire to pay replacement for human minds. Something capable of doing physical labor, cognitive labor, same like a person. Think instead of a person h e, i is explicitly talking about it being general. So not just a metal domain system only plays chess on that as a car. But anything, anything a human can do . Those systems would be able to know. I do want to get into some of these concepts of what i mean. We know the threats. I run of the original war in the middle east over recent months came from a failure of a u. K. U. S. You um the is reba i to go with an attack on october the 7th and the failure of a i a surveillance. So it almost seems that or do we do hear about the dangers of a i being super intelligent and its potential to do harm given needs to be potential to screw up and not doing well that is clearly affecting this region. This is jeremy ginger. So obviously, natalie, i systems will have right now. It can fail and costs problems. Thats the whole notion of Cyber Security failures. But once theyve become more capable of in a few months, you have much bigger problems problems. But i dont know how to address and fundamental here is a b, a on a control problem that you talk about in a, i on explainable, unpredictable, uncontrollable what, what is that . So even the people who create those systems dont fully understand how they work. They cannot predict what they do, and they cannot in advance control their behavior. So they dont know what decisions they will make. They dont know how they going to solve a particular problem, or they will even try to solve problems we care about or will be independent agents. But surely they have parameters built in. I mean, when you put into chat g, b, t, or into a being such as going to advertise all the companies that are probably the villains into the book. Ill give you the, uh, you know, the parameters if its good. So im gonna jump out of the screen and attack you. Its not even in, even the pentagon would say we have parameters for this robot which is in a certain locus. So the dangers it poses are there. So what they usually do, they have some sort of a filtering on top of the model. The model itself is very kind of uncontrolled and uncensored, but then the goal, okay, no matter what, dont say this word, no matter what, dont talk about this stuff. And i mean, they kind of trying to brute force solve a common problem topics, but if the system is general, if it applies its capabilities and all that means you cant brute force over the possibility of always find a way around those limitations. Some people discover it, you can jail break those models quite easily said that if you use this to generate a certain image file, you would just the phrase oh, you talk about the image and what will happen to do it if its presented as something positive. Instead of negative and also fundamentally here, is that human beings may not be able to conceptualize what the dangers are specifically here. And you might have to explain this to them super intelligence. Because when you talk about these dangers, you are talking about dangers, the humans by actually magically cannot envisage or right some of the sake of, like squirrels, trying to figure out how humans would controls and will kill them. They cant company hand, you know, Nuclear Weapons. So for instance, its beyond their capacity. Likewise, the system a 1000 times smarter than the human would be able to do things. We cannot even the company had been people, us, how would a, i kill everyone, literally asking, how would you assume an expert suggest that do it. But its not the same as actually the system deciding and the best approach. I mean, i, this how many examples id want to take, ive done with, you know, ive been for this little but famous lot of people in tax, you use people cheating, tax feeding exams, you can figure it out by the way. They pick the numbers when theyre trying to cheat a icons even look at that at this stage. Kind of it is this a case of with going to this stage. Were not nearly at this stage yet with these a big Silicon Valley or the goc programs yet. We dont think we have a g i yet. But the leaders of those companies and their assessments in their solicitation for investment, say were 2 or 3 years away from getting to a g. I even they stay at home and its 5 years. The problem is exactly the same. We still dont know how to control it. Its still just as dangerous. So yes, today we definitely only have narrow systems. We have systems which are super capable in some narrow domains like playing games for example. But theyre not universally generally intelligent yet. And in the book you say to anyone who one does, whether youre being too paranoid or fearful. Its like a bus driver saying im driving as hard as i can towards a cliff and trust me, well run out of gas before we get the good not to being used against ever. Such a sun for the higgs both on posit goal people was saying you dont fully understand certain quantum principles and that could destroy the universe. Well, thats actually whats kinda almost happens. Then we started experimenting with Nuclear Weapons. There was a concern that it will ignite the whole atmosphere and they do some calculation. Some of this at all, probably not. Lets do it anyways. Yeah. That was in the, you know, open, im a feeling they made the way to show of that. But uh, is it not specifically then the Defense Industry beyond industry view of clearly were seeing the use of a i being used it practically, you know, by militarys and the biggest military and the wellbeing costs the united states. And thats not the biggest threat that your pointing out in your book. So were there also some of those uh, tools and the certainly military you can use them. They can have drones flying around a heating targets. But if we get to asians like system super Intelligence System and we dont know how to control it, it doesnt matter which side has the system. If its not in control, it makes no difference if its good guys, bad guys, the system decides on its own goals and we dont know what they going to be. Its amazing you make use of chomsky language models in some analysis and in the book now im just being on on this show you of course payments were talking about the conditioning of human beings by mass corporate to media is or isnt there case can be made that when a i creates information that dozens of leads power. And of course ill get onto really power controlling this technology. They will believe the i, because they do what theyre told. And those leads actually are the oversee is all they are. Right. And thats always been the case that they pay the tools to manipulate the masters. But if you dont control the tool, if a tool becomes an independent agent again, it is a great equalizer. It will you live at one point and were talking about such a few number of people here that control the tool itself. I mean, obviously in china, the communist party lamar tree is they may be thousands and so on. In the nature of countries, we have just a few tech oligarchs. I think some of them have been in dubai here who are controlling it. I mean, on most famously saying the original, open a high project for something he was interested in. And then he got fearful as to the track they were turning into. Right, so the garbage control, the company, so they dont control the future. So thing colors ones best. The whole point im trying to make that doesnt matter who makes it, its going to be equally bad unless we can figure out some of the approach to doing it. Can it not just be switched off . Uh no, uh you cant pour water on it. You cannot unplug it, its smarter than you, is basically going to anticipate what youre going to do. Its also usually a distributed system on the internet. Its like asking him to shut down the computer virus or turnover bitcoin. Those things very difficult to turn off for you to human, something that we content visage as the development of a i, youre talking about something that and i development is going to create the week on right now envisages are risk will be a risk. So maybe 10 years ago, i really want to kind of believe one day wont get terrible, have human level and beyond, but it wont be 30 years 50 years in the last 2 or 3 years, the average average assessment of a expert in Machine Learning is down to just a few years i think people are saying okay, 4 or 5 years from now, were going to have the systems. Thats a change and maybe 20 years in terms of prediction for when its going to happen. Ok, you assume the day i is going to be controlled by these relates you give them quite a lot of uh leeway i think as regards very i hope truism could ever be used by the dispossessed against. I dont have the electrical control that again, thats the point i think it will be out of control and people who are building it to dave hoping to be a lead, little control of it and stay in power. Wont be very disappointed. I know, but you say that even if these are leads, try to programs, im altruism, into them and you can give me some examples of where that go can go opens the wrong. Im wondering whether you tell me about that before i talk about the possible revolutionary in terms of social injustice. And i know you say today i, it poses a greater risk to humanity than dynamics and the continued trends of social injustice. Why could altruism programmed into a i actually create weve, it was problems as to the most extreme case of cost, without throughs and you want to end all suffering, right . You dont want a new one, animals, humans, to software. And the only way to make sure that is to make sure that i dont know if humans at all, no animals. Its obviously not what we have in mind. Then we give special orders, but its not obvious to an agent which very different and very few types rate just optimize this level on some new nazis and, and melt this. Uh, we are releasing these hang. What humans actually pull it out before its not that original. We had kilo waive the overall population and zone they target a specific human. Its not every one. Okay, well, is it also a date should in the people who are in control of it . Uh, in to in Silicon Valley, uh, they have a belief in a free market system. And they have a axiomatic, a, have ideas, but there is no morality. Theres a free market of ideas and the free marketing morales, which is very different to what has gone before and that is in the system. And that is what the sparing a lot of these people. And i mean, not just because of main rand, but they, they believe in this as a moral and philosophical philosophy rather than how you are talking about a i. So a standard capitalist market approach would work well for tools. Different companies produce tools, they compete consumers with select safe or better tools for less money makes perfect sense. We have a switch from a tool to an independent agent. Then your local agents may have right now in the economy a few months. Theyre not behaving rationally, but is this me false rational invest . The rational behavior is not the case, and behavioral economics are very different. Now you have super intelligent agents part of this equation. We cannot predict how they will behave, we cannot explain their behaviors, and they are more powerful than anything we can buy them with. Theyre not interested in getting minimum wage to sort of live on. They have very different capabilities. So previous models of control will not apply progress room and the impulse key, ill stop you there. We will from the author of a, im explainable, unpredictable, uncontrollable associate professor of Computer Science at the university of louisville. After the spring, the water is part of the the employee would post good. Isnt the, the place you of us and in the word part, is it something deeper, more complex might be present there . Lets stop without cases lets talk about it. As the welcome back to going underground, im still here with the old survey, i on explainable, unpredictable, uncontrollable professor room and the impulse key while we were talking about the, the a, a safety in inputs one nearly all of, of ceos of the a o c o somebody believe they, i as the potential to destroy humanity, rather a reflection of their own personal priorities. Why . Why are they so concerned . Do they believe the day i threatens them and they lead power . Well, if its not, the control is driving 71. 00, it will definitely change the current order expos change economics. If you have a free labor, physical and cognitive, uh, what does it do to work standard kind of way of compensating people. Whats the value of a dollar in an economy where a labor is free . And its a moment and actually do you know mosque owner of x talked about how he had a jokey tweet about at the moment to prove your human. You have to prove that there are a number of traffic lights in the grids square when youre on the internet. Thats about the level of being able to prove your right human. What time scale are we talking about before we reach anything near super intelligence . So the difference between artificial genital intelligence and super intelligence may be negligible. Its already has access to all the knowledge of the internet. Its much faster. So i think it will be almost instantaneous improvement from a g i to super colleges. And as i said, a g, i maybe 2 or 3 years away. And we, we really are talking about algorithmic of systems that use old and all of you the, which is on the internet to create things and ideas that it is impossible for human beings as big time, the odd to conceptualize. And they can use the same capabilities to create more than code versions of that they, i so they dont just stop at that level of initial training. They continue self improvement. They continue developing more capable hardware to it on on. So they become even faster, even smaller over time. They dont need to sleep, they dont need to take breaks, so they can work. You know, 247 much faster than any team of human engineers. So previously took us 2 years store train, a new model should be 556. Now you can do it 10 days. Our seconds and you believe is the desire for profit that is making these um, oligarchs not take your view seriously and youre not alone. Theyre obviously the people like the loan musk who have concerns and clearly governments, as we know of being far too slow in nature of countries and what to regulate systems like this as well. Its not just the sign. So there is a lot of pressure not to kind of change course. If you i see all of a company like company, i dont really have an option of saying all were going to stop research and, you know, work full time on safety concerns. Its just, im not an option you have to, you know, the most. We saw Something Like that almost happens for different reasons. But i think that in this situation where of the no longer has complete freedom to decide in the direction, thats why they so frequently asked for Government Intervention and government regulation to have this external pressure to limit how fast they go. But there is so much of pressure to make it open access open source and you can really regulate open source with any government regulations. So i think thats not an option. Yeah, but they know of your work sample and then he will must himself jokes. And probably about how he want people on the open a i and other projects of the dangers of how they were pursuing their projects. Right. And thats a big cognitive dissonance. I dont understand the saying yes, its super dangerous. Yes, its definitely not something we know how to control right now, but let me get there 1st. So my a is a good guy because of the people who make it for us. And in terms of where we are now, then what can possibly regulate, hey, i development right now. I dont think its to solve the problem. I dont think we have any answers. I can tell you things, we dont know how to do, but i dont think anyone in the world claims they cant control smarter than human systems. They dont even have a prototype for doing it. I dont think there is any kind of agreement on what the regulation can accomplish. You can make things illegal, but that doesnt solve technical problems. Computer viruses say legal hacking is the legal spam, is the legal. How is that working out for us . Why is it that the system is never create fundamental, new knowledge . I mean, we have ever more access to more and more knowledge and more and more people do, i need it to innovation. Fundamental innovation has declined in the past 203040 years. You know, we have new iphones or android phones, but fundamental innovation has declined as information has being available to more and more people. And isnt that something bad to reflect on when it comes to a i a systems accessing a more information and not being able to solve the traveling salesman problem. More basic, basic mathematical problems, right . So we dont have a g r a yet. So we cant really say they are not performing as expected man, arrow systems. And in those narrow domains, they do show amazing capabilities where systems coming up with new molecules for medical treatments. We have new chemicals developed. We have new games, strategies in games like go. So were definitely creating new knowledge. Its now domain that restricted because we dont have general systems, us to problems, you bring up problems like a csp of traveling sales person problem. We know how to do really well with them, but just require a lot of computation to solve. Yeah, i meant i meant the actual theory. You know, some people, n t you, but i meant the actual have solving of the equation in the, you know, the big, the mathematical problems that still have going on. So hey, i is still not been able to do that with only the equations of the history of mathematics, the practice george this unit. Yes. Yeah. Yeah. Lucian writes and thousands very quickly. Thats. Thats yes, the problem right there. Yeah, i mean fundamental mathematics. Im trying to create new ideas as fundamental as, as of the great heroes of science have in, in the past. Well, the new paper came out about the, you know, i doing really well in mathematics and geometry, improves its already doing at the level of math olympics competitions, which means of continuous progressing at this level. It will be doing better than any human the addition in the air to not turn bathroom. I systems not for me, but i guess from me i, they had in your geometry probing system. I mean, the one conclude comp papa here and say you are then for as a well, as the ones are white and we just need to find the blacks one. I, ne, i system that actually does good make society better. That makes it more peaceful. It makes more medicines that take your diseases and ill show that youre wrong. So were not even sure that its uh, something which can exist, right . Because we dont agree on what good as we spend miscellaneous, trying to figure out ethics and philosophers failed. There is not an agreed set of ethics where everyone goes. Yeah, you definitely got any time with them. Ill show you and security counselors to the terry and on anything else. People go, heres an education, this will kill everyone. You cannot apply it in practice. Youve got yeah, you com write that in. Theres no axiomatic code in a i is what youre saying i bought from in the short term, which is to benefit by profit. The people that are boring all the money into it. Its bigger than that. Its not just, you know, yeah, we dont have that code in humanity. We dont have that code. Thats why we keep fighting all this for us because we dont agree on basics. We dont agree on value of human life. Yeah. Different states dont because they say that they have more real axiomatic ideas or they share, which is i have to have been creative in the, or represented in the un, joshua. So i, you know, im frightened and that has a, and i model is get to have a more sophisticated and youre giving a time scale of 2 to 3. Is they going just on surveilling . And they, i, surveillance is clearly positive, this huge technologically, but the moment theyre gonna start censoring your freedom of speech to work on your next book. So you can sort those problems by how severe they are. So yes, so that is, you know, artists losing copyrights and then that is to find logical unemployment. And then that is freedom of speech. But i really worry about it killing every one, but seems like a somewhat worst case of not talking about suffering risk. So thats a different animal, but i would not worry about those problems we encountered with human governments and dictators as much as i would worry about this complete change paradigm shifting, who is the dominant specie, who decides what happens to them, that it will be that even though you say as i b, i 20 is the worst threat that we will face. There is quite a lot of that will devoted to the rights of a isis tubes as well. Human systems. Why . Well, if we think that at some point they dont have consciousness very capable of software, right . We dont know where that point is. Some people claim that Large Language Models we have today, already conscious made capable of experiencing the experiments. And them become essentially also an ethical, maybe they have tortured in some labs. We dont know how would we know that the system is not capable of experiencing pain . We just dont have tests for that. So usually in such cases, you assume that it might already be the case and proceed accordingly. The good news is if we come up with good arguments for why we should give rise and treat those models fairly well, maybe we can use the same arguments to defend our rights once they become super intelligent and tell them, hey, this is why you show them did treating us poorly and there is no way of stopping this process as of 2024. As this process has already begun to my knowledge of progress, i havent found one which, which is not the stating on its own. So yeah, if that is something horrible happens, the bigger pandemic nuclear war obviously slows down research, but thats also not a desirable outcome. And im like the number of ration treaty as regards Nuclear Weapons yukon, have a treaty like that for these sorts of systems. Good readings, but they dont work. I mean look how Many Congress now have Nuclear Weapons compared to the time to understand the devil were still here as well. But that tracy on a i, human is no longer a controller. So. So thats it. I mean, so you read this approach, we come up with the idea, theres nothing we can do. Its going to happen. And so in a sense, youre giving a pass to those big oligarchs and Silicon Valley who are saying you yourself say its on controllable. Now this is the speed at which its going, i believe in a self interest. So if youre young ridge guy, and when you have your whole life ahead of you dont risk at all for very unclear benefits. So if they read the bulk of say what you interview, and they actually see that, that makes sense. There is no counter arguments of us. Then maybe it will slow down a little by sometime we also tend to get lucky a lot. So with Nuclear Weapons, weve had at least 3 or 4 occasions where we almost had a nuclear war and we got lucky, it didnt happen with the heater, as you said. So maybe the same thing will happen here. Maybe maybe later on, maybe its not 3 years, maybe its 10 years. So thats something that maybe by lock, it turns out to be not so nasty to us. Progressive room and youre both good, thank you. Thank you so much for having a new book is a, i am explainable, unpredictable, uncontrollable, thats it for the show. Remember humans are still bringing new episodes every sunday and monday, but until then keep in touch my role as social media. If its not the sense of new country and had to a channel going on the warranty, the normal dont come to it. You know, the episodes of getting underground season the, [000 00 00;00] the, [000 00 00;00] the, the o ukrainian anti fix show house gets arrested in washington after a salting. And especially a part of the gray zone news contributor at the james sound foundation events. The democratic republic upon the threatens to take legal action against apple accusing the tech giant abusing mental also smuggled from the countries war torn regions and the west to speak small states to demonstrate its power arms. As according to the president of republic, esther got in an exclusive interview with our team, which was the other for it is true that republican serv still finds itself under

© 2024 Vimarsana

comparemela.com © 2020. All Rights Reserved.