He serves on the editorial board of the Machine Learning journal, is a co-founder of the International Machine Learning Society, and is the winner of the SIGKDD Innovation Award. He is the author of over 200 technical publications on machine learning, data mining, and other areas. Tonight he joins us for the launch of his new book, The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World. Please join me in welcoming to Town Hall Dr. Pedro Domingos. [applause]

Thank you all for braving the traffic to be here. Tonight I want to talk about a change happening in the world today. This change is as big as the internet or the personal computer. It touches every part of society. It affects everybody's life; it touches your life right now in ways that you are probably unaware of. Fortunes are being made because of it, and unfortunately jobs in many cases are being lost because of it. Children have been born because of this change. This change is the rise of machine learning.

So what is machine learning? Machine learning is computers learning to do things by themselves. It's the automation of discovery. It's computers getting better over time, the same way that humans do. It's like the scientific method—making observations, formulating hypotheses, testing them against data, refining the hypotheses—but carried out at computer speed, faster than any scientist ever could. As a result, in any given period of time, machines can discover more knowledge than any scientist could. Now, that knowledge is not as deep as Newton's laws or the theory of relativity, at least so far. It tends to be more mundane knowledge, but mundane knowledge is what our lives are made of: things like what you search for on the web, or what you buy when you go to Amazon. Machine learning helps companies understand their customers. Machine learning helps you find books to read, movies to see, a job, a new house. A third of all relationships leading to marriage today begin on the internet, and machine learning algorithms propose the potential matches. There are children alive today who wouldn't have been born without machine learning—so perhaps it's not so mundane after all.

Your smartphone is another example. It uses machine learning to understand what you are saying, to correct spelling errors, to predict what you are going to type, and to make suggestions. It can even anticipate what you are about to do and act accordingly. All of this is happening today with the help of machine learning, and as it progresses, your cell phone and other systems will learn more and more about you and, as a result, serve you better.

With all of this economic value in machine learning, it's no wonder that all the major tech companies are investing heavily in it. Google recently paid half a billion dollars for a company with essentially no customers and no products, because it has better learning algorithms—better learning algorithms alone are worth that much. As another example, IBM just this month paid a billion dollars for a medical imaging company, not because of what the company does, but because IBM wants access to its vast library of medical images so it can train learning algorithms to do things like automatically diagnose breast cancer—the kind of thing that today takes highly paid doctors to do. Machine learning experts are very highly sought after.
The director of research at Microsoft says that what you pay a top deep learning expert these days—deep learning being a hot area within machine learning—is similar to what you pay a top NFL quarterback. So the geeks have won, finally.

Now, all well and good, but machine learning also has a downside that we need to be aware of. Machine learning is increasing the automation of white-collar jobs, so it could cost you your job. Some people say the NSA uses machine learning to spy on you; we don't know exactly how, because what the NSA does is secret. A fear that gets a lot of attention in the media is that machine learning may lead to the Terminator—evil robots taking over the world. Machine learning could be your best friend or your worst enemy depending on what you do with it, which is why I think we are at the point where everybody needs to have a basic understanding of what machine learning is and what it does. That's why I wrote a book about it, and that's what I'm going to talk about tonight. It's not that you need to understand the gory details of the algorithms. It's a little bit like driving a car: you don't need to understand how the engine works, but you do need to know what the steering wheel and the pedals do. Machine learning is the engine, as we will see, and we as a society have a lot of decisions to make regarding it—more so than ever before.

So how is it that computers do this thing of learning from data? It may come as a surprise to you that it's even possible. Computers are supposed to be machines that do what we program them to do, the same repetitive task over and over again, and discovering new knowledge is supposed to require a lot of intelligence and creativity. Picasso famously said that computers are useless because they can only give us answers. Machine learning is what happens when the computer starts asking questions. And the question a computer asks when it's learning is: how do I turn this input into this output? For example, the input might be an X-ray, and what the computer is trying to learn is to say "yes, there's a tumor here" or "no, there is no tumor here." It does this by looking at a lot of data.

In the old days, before there was machine learning, this is how things worked: data and an algorithm went into the computer, and out came the output. In the breast cancer example, the data might be an X-ray image and the output the diagnosis. The algorithm is just a sequence of instructions that tells the computer exactly what it needs to do. It's like a recipe for meatloaf: how do you make meatloaf? You take these ingredients, you combine them like so, and you have meatloaf. An algorithm has to be much more precise and detailed than a recipe, because computers are not smart. Machine learning turns this around. In machine learning the output, instead of coming out, also goes in, and what the learning algorithm produces is the answer to: given the input and the output, what is the algorithm I need to turn the input into the output? If this is the image of the breast and this is the diagnosis, how do I get from one to the other?
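To make that inversion concrete, here is a minimal Python sketch—not anything shown in the talk, just an illustration using scikit-learn with invented toy numbers rather than real medical criteria:

```python
from sklearn.tree import DecisionTreeClassifier

# Toy data: each row is [tumor_size_mm, tissue_density]; the labels are
# diagnoses a doctor already made (1 = malignant, 0 = benign). All invented.
X = [[2.0, 0.1], [1.5, 0.3], [8.5, 0.9], [7.0, 0.8], [3.0, 0.2], [9.0, 0.7]]
y = [0, 0, 1, 1, 0, 1]

# Traditional programming: a human writes the rule that maps input to output.
def hand_coded_diagnosis(size_mm, density):
    return 1 if size_mm > 5.0 else 0

# Machine learning: inputs AND outputs go in, and the rule that maps one to
# the other comes out.
model = DecisionTreeClassifier().fit(X, y)

print(hand_coded_diagnosis(6.0, 0.5))    # the rule a programmer chose
print(model.predict([[6.0, 0.5]])[0])    # the rule the learner induced from data
```

In the first case the knowledge comes from the programmer; in the second it comes from the data.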
The amazing thing about machine learning is this: when you program computers the traditional way, it takes human programmers, and for every application you need a different program—one to do credit scoring, one to do diagnosis, one to play chess—but the same learning algorithm can learn all sorts of different knowledge, all of the things I just described. So the holy grail of machine learning researchers, of whom I am one, is to discover the master algorithm: an algorithm that makes other algorithms. A single algorithm with which you can discover all knowledge, if you feed it the appropriate data. If you give it credit data it will learn to assess credit risk; if you give it data about breast cancer it will learn breast cancer diagnosis; and so on.

Now, what could this possibly be? That's what most of the rest of my talk is about, and then I will talk about the consequences if we succeed in this enterprise—what it is going to make possible. Part of what makes machine learning interesting is that these algorithms come from very interesting origins; they usually have their roots in other fields of science. For each candidate master algorithm there is a school of thought, a set of people in machine learning who pursue it. I call them the five tribes of machine learning, and we will see what each one does, what ideas it's built upon, and what its algorithm is. We are going to run through a lot of different fields.

What are the five tribes, as a preview? There are the symbolists, who have their origins in philosophy, logic, and mathematics; for them learning is induction, and they think of induction as the inverse of deduction, so their master algorithm is inverse deduction. Then there are the connectionists, whose idea is to reverse-engineer the brain: the human brain is the best learning algorithm around, so they try to implement the brain on the computer, taking their inspiration from neuroscience; we will see their ideas in a minute. The evolutionaries say no—the greatest learning algorithm on earth is not the brain, it's evolution. Evolution made the brain and the rest of you and all of life on earth, so it's an amazing learning algorithm, and the learning it inspired is called genetic programming. The Bayesians have their origins in statistics and in something famous called Bayes' theorem, and their master algorithm is Bayesian inference; we will see what that is as well. Finally, there are the analogizers, who have their roots in a lot of different fields but mainly in psychology. Their idea is that learning—some say all of intelligence—works by analogical reasoning, by looking at similarities, and probably the most famous algorithms in this line of work are nearest-neighbor and kernel machines; we will see those too.

Let's start with the symbolists. Here are three of the most prominent symbolists in the world, among them Tom Mitchell of Carnegie Mellon and Ross Quinlan, who is actually an alumnus of my own university; his was one of the first PhDs in computer science awarded there. What is the idea of the symbolists? In some ways it is the most direct version of the notion that we learn the way scientists discover things: by trying to fill gaps in our knowledge, formulating hypotheses and refining them. The basic idea is this notion of inverse deduction. How do we induce things? Induction can be thought of as the inverse of deduction: deduction goes from the general to the specific, and induction goes from the specific to the general, in the same way that subtraction is the inverse of addition, or the square root is the inverse of the square. Addition gives us the answer to the question: when I add two plus two, what do I get? Subtraction answers the question: what do I need to add to two in order to get four? In exact parallel, deduction is how you go from knowing that Socrates is human and that humans are mortal to concluding that Socrates is mortal—from general knowledge about humans to the specific. The inverse of that is saying: it's in my data that Socrates is human, and I also know that Socrates is mortal; what am I missing in order to get from one to the other? Of course the answer is: I need to know that humans are mortal.
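A toy sketch of that "find the missing rule" idea, in Python—an invented miniature representation, nothing like a real inductive logic programming system, just to show deduction and its inverse side by side:

```python
# Deduction applies a general rule to facts; inverse deduction (induction) asks
# which general rule would have let us deduce an observed conclusion.

facts = {("human", "socrates")}
observations = {("mortal", "socrates")}

def deduce(facts, rule):
    """Apply a rule of the form 'if P(x) then Q(x)' to every known fact."""
    premise, conclusion = rule
    return {(conclusion, x) for (pred, x) in facts if pred == premise}

def induce(facts, observations):
    """Propose rules 'if P(x) then Q(x)' that would explain the observations."""
    rules = set()
    for (p, x) in facts:
        for (q, y) in observations:
            if x == y:                  # the same individual links premise to conclusion
                rules.add((p, q))
    return rules

learned = induce(facts, observations)
print(learned)                              # {('human', 'mortal')}  i.e. "humans are mortal"
print(deduce(facts, ("human", "mortal")))   # {('mortal', 'socrates')} -- deduction recovers the observation
```

The learned rule is exactly the missing general knowledge, which can then be combined with other rules to answer new questions.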
Rules like "Socrates is human" and "humans are mortal" are written here in English, and computers don't understand English, so for the computer the rules are written in formal logic—but the general idea is the same. You can learn a bunch of these rules from different data, and then you can do something that none of the other paradigms can do, which is to combine rules in arbitrary ways to answer new questions about things you have never seen before. This is the basic idea behind symbolism.

Here is an example of what you can do with symbolic learning. Guess who the biologist in this picture is. It's not this guy here in the lab coat—he's the computer scientist—and the person on the left is not a biologist either. The biologist is this machine. This machine is a robot that knows about molecular biology, knows about DNA, and is making discoveries about metabolism, diseases, and so on. It runs the whole process all by itself: it uses this method of inverse deduction to form hypotheses, literally carries out the experiments with DNA microarrays to test those hypotheses, and, just like a human scientist, takes the results, refines its hypotheses, and keeps going. Right now there are only two of these robot scientists in the world—they're called Adam and Eve—and Eve has already discovered a new malaria drug candidate that is now being tested. And once you have two of these computers you can make a lot of them, and then it's as if you had a million more people doing medical research—so this is a very powerful thing to be able to do.

Now, the connectionists are not big fans of this approach. To them it's too abstract and too rigid—making deductions on paper. So let's see what the connectionists are doing. The most prominent connectionist started as a psychologist but gradually became a computer scientist. Ever since his PhD in the seventies, his goal in life has been to understand how the brain works, and he has persisted through thick and thin and made a lot of progress, although he has also seen a lot of failure. In fact, he tells the story of coming home from work one day very excited and saying, "Yes, I've done it, I've figured out how the brain works," and his daughter saying, "Oh Dad, not again." [laughter] Another famous connectionist is literally on the front pages of newspapers these days because of the success of deep learning. Deep learning is the modern name for connectionism, or neural networks as they are also known.

The idea of the connectionists is to build learning that is inspired by the human brain. How do we do this? How do we build learning algorithms based on how the brain works? The brain is made of neurons, and neurons are these interesting cells that look kind of like trees: there is a cell body, the roots are called dendrites, the trunk is called the axon, and it branches out again at the top. It's a little bit like a forest, with one big difference.
The roots of one tree connect with the branches of others, and through these connections travel electrical impulses—it's like a big electrical storm. As you sit listening to this talk, there is an electrical storm of impulses being fired across your neurons. Basically, the connections between neurons—the synapses—can be stronger or weaker: the stronger the connection between two neurons, the more likely the first is to make the second one fire. When the total electricity coming into a neuron exceeds a certain threshold, that neuron in turn fires, and that can make other neurons fire.

The way we do this on a computer is, number one, we build the simplest mathematical model we can of how a neuron works. In come the inputs—these inputs could, for example, be the pixels of an image, of a mammogram or a face—and we represent how strong each synapse is with a weight. If the weighted sum of the inputs exceeds a threshold, the output is one, meaning the neuron is firing; otherwise it's zero, meaning the neuron is not firing. Then we connect these model neurons into a big network with layers. The name "deep learning" comes from the idea that these are networks with many layers, each one feeding into the next.

The problem the connectionists had to solve is: how do you learn the weights? This is tricky, because you show the network an image of a cat, say, and the output is supposed to be one, because it's a cat, but it comes out as 0.2. It needs to go up. The question is: how do we make it go up? At the output neuron there is a certain error, so we figure out how much each weight is responsible for that error and change the weights accordingly. Then, in turn, each neuron's output is a function of the outputs from the previous layer, so those have to change too: the ones causing the output to be too low—maybe because their weights are negative—need to go down, and so on. Backpropagation, as this is called, solves the problem by propagating the error back through the network to figure out where, in this artificial brain if you will, the synapses need to change. When something goes wrong, who do you blame? If a connection is contributing to the right answer, nothing needs to change; when the network gives the wrong answer, something does, and backpropagation figures out what. There is even some evidence that parts of the brain work by an algorithm like this.

Deep learning is responsible, just in the last few years, for amazing progress in things like vision, speech recognition, and translation. For example, it is part of how Google does search, and it's how these companies do image understanding these days. The Skype simultaneous translation system, where you can talk live with somebody in a different language—these all use this type of learning. One very famous example, which was on the front page of the New York Times, is what has come to be known as the Google cat network. At the time it was the biggest neural network ever built—it had on the order of a billion connections, which is still small compared to your brain but humongous compared to previous networks—and it was trained on YouTube videos. It just sat there watching YouTube for a long time; maybe we should call it the couch potato network. I don't know if you know this, but people like to upload videos of their cats, so one of the concepts this network formed by itself was the concept of a cat—which is why it became known as the cat network—but it also learned to recognize dogs and people and other things.
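Here is a minimal sketch of those two ingredients—a neuron as a weighted sum pushed through a squashing function, and backpropagation nudging the weights that were "to blame" for the error—on a tiny toy problem (XOR). The network sizes, learning rate, and iteration count are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)        # XOR: needs a hidden layer

sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))           # smooth stand-in for the firing threshold
W1 = rng.normal(size=(2, 8))                           # input -> hidden "synapse" strengths
W2 = rng.normal(size=(8, 1))                           # hidden -> output

for step in range(20000):
    h = sigmoid(X @ W1)                                # hidden neurons fire (values between 0 and 1)
    out = sigmoid(h @ W2)                              # output neuron fires
    err_out = (out - y) * out * (1 - out)              # how wrong the output is
    err_hid = (err_out @ W2.T) * h * (1 - h)           # propagate the blame back to the hidden layer
    W2 -= 0.5 * h.T @ err_out                          # adjust the weights responsible for the error
    W1 -= 0.5 * X.T @ err_hid

print(np.round(sigmoid(sigmoid(X @ W1) @ W2), 2))      # should end up close to [[0], [1], [1], [0]]
```

Real deep learning stacks many more layers and uses far larger data, but the adjust-the-weights-by-propagating-the-error loop is the same.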
The evolutionaries have a different idea. They say: where did the brain you are tweaking come from in the first place? You were just adjusting the synapses, but someone had to invent the brain, and it was evolution that accomplished that. The big pioneer of evolutionary learning was John Holland, who died recently. For the first couple of decades, the people doing evolutionary computing were essentially his students and their students, and then in the eighties it took off. John Koza and others did fun things with evolutionary learning that we will see in just a second.

What is the basic idea of evolutionary learning? It's to simulate on the computer the process of evolution in nature. We have a rough understanding of how evolution works, ever since Darwin, and because we now know how genetics works. Schematically, what happens in evolution is this: at any given point in time you have a population of individuals, and each individual has a genome. Those individuals go out into the world, and some have many offspring while others have few—the world, in effect, evaluates them for fitness. Some individuals turn out to have higher fitness; maybe the giraffe has a longer neck that lets it reach higher leaves and eat them. The individuals with the highest fitness get to reproduce, and in particular, with sexual reproduction, you cross over the genes of two successful individuals. There is also the variation that comes from DNA not being copied perfectly—the mutations. After all of that you have a new generation of genomes, and the process repeats. Over time, amazingly, this process takes you from an amoeba to a human being.

What the evolutionaries have done is implement this on the computer. The genome is just a string of bits, because it's on a computer rather than DNA, but otherwise the process is in many ways similar. They have been able to do quite amazing things: for example, they have evolved new circuits for radio reception and new low-pass filters. These circuits often work better than circuits designed by human inventors, and in fact some of these evolved designs have been patented.
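A toy sketch of that loop—population, fitness, selection, crossover, mutation—on the simplest possible problem, evolving a bit string of all ones. Everything here, including the fitness function and the parameters, is an illustrative stand-in:

```python
import random

random.seed(1)
GENOME_LEN, POP_SIZE, GENERATIONS = 20, 30, 60

def fitness(genome):                        # stand-in for "how well this individual did in the world"
    return sum(genome)

def crossover(a, b):                        # sexual reproduction: splice two parent genomes
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome, rate=0.02):              # occasional copying errors
    return [bit ^ 1 if random.random() < rate else bit for bit in genome]

population = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for generation in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[: POP_SIZE // 2]   # the fittest get to reproduce
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

print(max(fitness(g) for g in population), "out of", GENOME_LEN)
```

Swap the bit string for a circuit description and the all-ones fitness for "how well does this circuit receive radio," and you have the kind of system just described.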
A more advanced type of genetic algorithm is what is called genetic programming. The idea is: why do this with bit strings? That's a very low-level representation. A program is really a tree of operations—multiply this, add that—and we can evolve full programs the same way. Except now, instead of crossing over two strings, you cross over two trees: you pick a point in this tree and a point in that tree and swap the subtrees, so one offspring is the tree in white and the other is the tree in black. The tree in white, for example, is one of Kepler's laws: the duration of a planet's year as a function of its average distance to the sun—a constant times the square root of the cube of the distance. You can actually rediscover this, and more complicated things, using genetic programming.

These days, perhaps the most interesting—and scary—application of genetic algorithms is not inside the computer but in the real, physical world. There is this area of evolutionary robotics where you literally have robots crawling around performing whatever function you want them to perform, and the robots with the highest fitness get to program a 3D printer to create the next generation of robots. You start off with robots that can't do anything useful, and you end up with robots that work robustly: if a leg malfunctions they can adapt and get around without that leg, whereas traditionally programmed robots are much more brittle. So the next time you see a spider-like robot crawling around, hopefully we will have kept these robots under control. Things get a little more dangerous once they are out in the real world, which brings up a whole host of other issues.

Bayesian learning comes from statistics. For 200 years the Bayesians were a persecuted minority within statistics, so they had to get hardcore to survive—and it's a good thing they did, because these days, with computers and data on the rise, Bayesian statistics is thriving. Probably the most famous Bayesian in machine learning is Judea Pearl; two other famous ones are David Heckerman, who is with Microsoft Research, and Michael Jordan of Berkeley.

So what do the Bayesians do? Bayesians have an almost religious attachment to Bayes' theorem: they think it's the formula that rules the world. For them, Bayes' theorem is machine learning. In fact, there was a machine learning startup that had Bayes' theorem spelled out in neon letters on its wall. So what is Bayes' theorem, and what is it doing? Let me give you the basic intuition.

Each of these tribes has a problem it is focused on and is trying to solve. For the connectionists it was the credit assignment problem; for the Bayesians, the problem is that life is uncertain. There is noise and ambiguity, and the knowledge you induce is never certain—it could always have been that something else produced the data you saw. So instead of choosing one hypothesis, the Bayesians consider all possible hypotheses in the space at the same time. What they do is look at the evidence, and the evidence changes how much you believe in each hypothesis. Bayes' theorem is just a way of doing that updating—of saying how likely each hypothesis is, given the data.

You start out with the prior probability, which is what makes Bayesianism controversial: it's how much you believe in each hypothesis before you have observed anything. Then, when you see data, there is this thing called the likelihood, which is: if this hypothesis were true, how likely would I be to see this data? The idea is that the more likely your hypothesis makes the world you see, the more likely the world you see makes your hypothesis. So you multiply the prior by the likelihood to obtain the posterior: how much you believe in your hypothesis after you have seen the data. As you see more data, most hypotheses become more and more unlikely, and hopefully at the end of the day there is one hypothesis you are pretty sure is true; in the meantime you may be entertaining a whole bunch of them at once. There is also this thing called the marginal probability, which normalizes everything. Doing this in practice can be very difficult, because if you are considering a lot of hypotheses the computation becomes expensive, but the Bayesians have come up with very successful applications.

One you are very likely familiar with is the spam filter. The first, and still probably most prevalent, way to build spam filters is Bayesian learning. What a Bayesian spam filter does is try to decide whether an email is spam or not, so the two hypotheses are "spam" and "not spam." The prior is: before you even look at the email, how likely is an email to be spam? Ninety percent, ninety-nine percent—take your pick. The evidence is in the email itself: if it contains a word like "free," it's more likely to be spam; if it contains the words "your mom," it probably isn't spam; if it contains the words "your boss," it probably isn't spam either, and you may want to look at it to be sure they aren't asking you to do something by tomorrow. Spam filters are Bayesian learning.
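A toy sketch of that update—prior times likelihood, word by word, then normalize—for the spam example. The word probabilities here are invented for illustration; a real filter would estimate them from large amounts of mail and smooth them:

```python
# Toy Bayesian spam filter: P(spam | words) is proportional to P(spam) * product of P(word | spam).
prior = {"spam": 0.9, "ham": 0.1}                      # belief before looking at the email

# Invented likelihoods: how often each word appears in each kind of email.
likelihood = {
    "spam": {"free": 0.30, "viagra": 0.20, "boss": 0.01, "mom": 0.01},
    "ham":  {"free": 0.02, "viagra": 0.001, "boss": 0.15, "mom": 0.10},
}

def posterior(words):
    score = dict(prior)
    for label in score:
        for w in words:
            score[label] *= likelihood[label].get(w, 0.05)   # unseen words treated as roughly neutral
    total = sum(score.values())                              # the marginal probability
    return {label: s / total for label, s in score.items()}

print(posterior(["free", "viagra"]))   # overwhelmingly spam
print(posterior(["boss", "mom"]))      # overwhelmingly not spam, despite the 0.9 prior
```

Notice how the evidence in the words can overturn even a strong prior, which is exactly the updating Bayes' theorem describes.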
Another application is Google. The algorithm Google uses to choose which ads to show you is a huge Bayesian network with millions of nodes, predicting whether you are going to click on an ad or not. The better it does its job, the more money Google makes—and the happier you and the advertiser should be, too.

Finally, let's talk about the analogizers. Reasoning by analogy is something there is extensive evidence that people do a lot: I need to solve a problem, so I look for a similar problem in my memory and map the solution from that problem onto this one. If that patient had these symptoms and this diagnosis, and this patient has similar symptoms, maybe the diagnosis is the same—that's the simplest form of analogical reasoning. Until deep learning came along, this was arguably the most important kind of learning in practice. Douglas Hofstadter, the cognitive scientist, is probably the most famous analogizer: his most recent book is 700 pages arguing that everything we do—not just learning, but all intelligent thinking—is just analogy. He very much believes analogical reasoning is the master algorithm.

So how does this work? Let me propose a simple puzzle to illustrate it, using the simplest machine learning algorithm there is—nearest neighbor—and, pound for pound, still one of the best. It was invented many years ago, but it's still good. Here's my puzzle. Suppose I give you a map of two countries, with the cities of one marked with plus signs and the cities of the other marked with minus signs. Just from the locations of the cities, can you tell where the border between the two countries is? Probably a good guess is the following: any point on the map that is closer to a plus city than to any minus city is probably part of the plus country. So the border between the two countries consists of all the points at exactly the same distance from the nearest positive city and the nearest negative city. This is just reasoning by similarity: points more similar to positive examples are positive, and points more similar to negative examples are negative. And instead of a map, this could be a space with many more dimensions—images, say. You can use this for recognizing faces and all sorts of things.

Now, if you look at this border you might say: it's a little jagged, a little artificial—too many straight lines and sharp corners. Another problem is that, if you think about it, you could throw away a lot of these example cities and the border would stay in the same place: if you threw away this city, its territory just gets absorbed by its neighbors, and in this puzzle that doesn't matter. But if you are learning from data, you have to store all the examples and match new cases against them, and that gets expensive. Support vector machines solve both of those problems. They keep only the support vectors—the examples actually required to hold the border in place—and they learn the border by doing the following exercise: walk across the map with the positive cities always on your left and the negative cities on your right, but stay as far away as possible from all of the cities. Give them a wide berth—think of them as landmines; you're walking across no-man's-land, staying as far from any landmine as you can. You want to maximize your margin of safety. This is what kernel machines do. A kernel, by the way, is a measure of similarity; the most important choice in this tribe is how you measure the similarity between two objects.
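Here is a minimal sketch of nearest neighbor on that map puzzle—made-up city coordinates, with plain Euclidean distance standing in for the similarity measure:

```python
import math

# Made-up "cities": (x, y, country), where the country is + or -.
cities = [(1, 1, "+"), (2, 4, "+"), (3, 2, "+"),
          (7, 6, "-"), (8, 3, "-"), (6, 8, "-")]

def nearest_country(x, y):
    """A point belongs to whichever labeled city it is closest to."""
    closest = min(cities, key=lambda c: math.dist((x, y), (c[0], c[1])))
    return closest[2]

print(nearest_country(2, 2))    # deep in + territory
print(nearest_country(7, 5))    # deep in - territory
print(nearest_country(4.5, 4))  # near the implicit border between the two countries
```

The same move—predict for a new case whatever its most similar known cases did—also drives the recommender systems described next.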
Now let me give you one example of analogizer methods at work, an example you are all familiar with: recommender systems. Amazon and Netflix use all sorts of algorithms to recommend products, but one of the most powerful is exactly this type of analogy-based reasoning. If I want to figure out whether you will like a certain movie, one of the best things I can do is look for people with similar tastes to yours: I look at the movies they gave a lot of stars to, and if those are also movies you gave a lot of stars to, and the movies they didn't like are also movies you didn't like, then you seem to have similar tastes, and I can assume that if they liked this new movie, you will like it as well. This idea turns out to work extraordinarily well. In fact, I have seen it reported in several places that a third of Amazon's business comes from its recommendation system, and that three-quarters of the movies people watch on Netflix come from its recommendations. Of course they are using more sophisticated variations by now, but this is the core idea.

So these are the five tribes. Each is quite interesting; each is solving a different problem—doing induction, assigning credit, handling uncertainty, reasoning by similarity—and each has been successful at what it does. Here is the big picture: we have five tribes, and each one has a problem it has worked on for a long time and that it solves very well with its master algorithm. Each tribe believes its algorithm is the key. Some connectionists say this is all we need—we will learn logic and reasoning with neural networks and we'll be done. I don't think that's the case. I think what we really need is a single algorithm that solves all five of these problems, because all of them are real. If you have an algorithm that solves one of the problems, it doesn't solve the others: if you learn by tweaking connection strengths, where does your artificial brain come from in the first place? The evolutionaries know how to answer that question, but they don't know how to handle the others. So what we need, and what a number of us have been working on for the last decade or so, is a grand unified theory of learning: a single algorithm that has the capabilities of all of these. Like the standard model in physics or the central dogma in molecular biology, we are looking for the single learning algorithm that will be able to solve all of these problems at once, and we can only imagine what it could learn as a result.

So what might that algorithm look like? We haven't found it yet, but we have made a lot of progress, and here is where we are right now—one way in which you might go about creating such an algorithm. Here's a very important thing to notice: the algorithms from the different tribes look superficially very different, but they actually all have the same structure. Every learning algorithm I have shown you is composed of three things. The first is the representation: the choice of language in which the learner is going to express what it learns. It's not Java or C++, because those are too complicated and would make learning too hard; it might be something like logical rules, or the Bayesian networks I mentioned. One way or another you have to choose a representation, and the natural first step in unifying these paradigms is to invent a representation that has the power of logic and the power of Bayesian networks—or, more generally, of the graphical models the Bayesians use. We have developed such a representation: a combination of logic and probabilistic graphical models. In essence, you have formulas in logic—think of them as the analogue of sentences in English—and those formulas have weights. If a formula has a high weight, that means you believe in it strongly, and a world in which it is false becomes a lot less likely; how probable a world is becomes a function of the weights of the formulas that hold in it. So this is indeed a powerful representation.
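To give a feel for what "weighted logical formulas" means, here is a toy, purely propositional sketch in Python—invented formulas and weights, far simpler than the real first-order machinery, but the same idea: a world's probability grows with the total weight of the formulas it satisfies.

```python
import itertools, math

# Toy weighted-logic model over two people, Anna and Bob (everything invented).
variables = ["anna_smokes", "bob_smokes", "friends", "anna_cancer"]

formulas = [
    (1.5, lambda w: (not w["anna_smokes"]) or w["anna_cancer"]),                    # smoking tends to cause cancer (soft rule)
    (1.0, lambda w: (not w["friends"]) or (w["anna_smokes"] == w["bob_smokes"])),   # friends tend to have similar habits
]

worlds = [dict(zip(variables, vals))
          for vals in itertools.product([False, True], repeat=len(variables))]

def score(world):
    """Unnormalized probability: exp of the total weight of satisfied formulas."""
    return math.exp(sum(weight for weight, formula in formulas if formula(world)))

Z = sum(score(w) for w in worlds)            # normalizer over all 16 possible worlds
probability = lambda world: score(world) / Z

best = max(worlds, key=probability)
print(best, round(probability(best), 3))     # a world violating no formula is most probable
```

High-weight formulas act like near-hard logical rules, while low-weight ones only gently tilt the probabilities—which is how the representation gets both logic's expressiveness and probability's tolerance for exceptions.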
The second part every learning algorithm has is what is called the evaluation function. The evaluation function answers the question: if you give me a candidate model, how good is it? If you give me a candidate way of diagnosing breast cancer, I have to be able to score how good it is. Typically we score candidates by how often they give the right answer, but also by things like how simple they are—other things being equal, I prefer a simpler hypothesis. All of these criteria can be encoded in the probabilities we just saw. More generally, the evaluation function should be a parameter of the master algorithm: the evaluation should be whatever you care about. If you're a company and you want to maximize profits, that is what you plug in.

The third part is the optimizer: the thing that searches for the candidate with the best score. Here there is a natural combination of genetic programming and backpropagation. I need to discover the formulas—which logical formulas describe the world?—and I can discover those using genetic programming, because logic is in effect a programming language and genetic programming is a way to discover programs. And to learn the weights on those formulas, I can use gradient methods like backpropagation. This is our current best shot at the master algorithm. It is still far from the answer, because there are many things it doesn't do, and in fact my suspicion is that the real master algorithm requires ideas we haven't had yet. We have the ideas I've talked about, and we have almost figured out how to combine them all, but my feeling is there are important things still to be discovered—and, ironically, I think the experts are not in the best position to come up with them, because they tend to be wedded to their own paradigms and have a hard time seeing outside of them. Part of the reason I wrote this book is to let other people find out about machine learning and start thinking about it and having their own ideas. If you crack the master algorithm, let me know so I can publish it.
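To make those last two pieces concrete, here is a minimal sketch of an evaluation function scoring a candidate model, and a simple optimizer searching for the candidate with the best score. Everything is invented, and the random hill-climbing here is only a stand-in for the genetic search over formulas and the gradient-based weight learning described above:

```python
import random

random.seed(0)
# Toy labeled data: (feature value, true label).
data = [(0.2, 0), (0.4, 0), (0.5, 0), (0.6, 1), (0.8, 1), (0.9, 1)]

def evaluation(threshold, simplicity_weight=0.01):
    """Score the candidate rule 'predict 1 if x > threshold': accuracy on the data,
    minus a small penalty standing in for a preference for simpler hypotheses."""
    accuracy = sum((x > threshold) == bool(label) for x, label in data) / len(data)
    return accuracy - simplicity_weight * abs(threshold - 0.5)

# Optimization: keep proposing small random changes, keep the ones that score better.
best = random.random()
for _ in range(1000):
    candidate = min(1.0, max(0.0, best + random.gauss(0, 0.1)))
    if evaluation(candidate) > evaluation(best):
        best = candidate

print(round(best, 2), round(evaluation(best), 3))   # a threshold near 0.5 ends up scoring best
```

Representation, evaluation, and optimization are the three dials; the different tribes mostly differ in how they set them.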
Let me conclude by talking a little bit about what I think the master algorithm will make possible. There are a lot of important problems in the world, things we have been working on for a long time, and it is clear that we cannot solve them without machine learning—and these are problems that involve every one of the issues we have been talking about; no single tribe will solve them. One example: we would all like to have robots cooking for us, bringing us breakfast, making our beds. If you go back to the five problems we talked about—composing knowledge, assigning credit, handling uncertainty, combining knowledge and reasoning, reasoning by analogy—a robot in your home faces instances of every one of them. The robot needs to be able to think on its feet. With the master algorithm we will be able to have robots that can do all the things we need them to do, and we are already starting to see robots in the home, running around, generating the data.

Here's another one. All of the major tech companies have a project to read the web and turn it into a knowledge base that computers can reason with. In the future, instead of issuing keywords and getting back a bunch of pages, you will ask questions and get answers. Probably the most famous of these efforts is Google's Knowledge Graph, and there are efforts like it in academia. There is no way to do this without machine learning, and again it needs all five tribes: you are dealing with the symbolic representations that logic provides, but text is also messy and full of ambiguity, so you need probability—if you look closely, you need all five things we talked about. So this is another thing that will not happen just because companies are putting resources into it; it depends on the progress of machine learning.

Here's perhaps the most important one of all: cancer. The reason cancer is a hard problem is that cancer is not a single disease. Everybody's cancer is different, and the same cancer changes and mutates as it grows; somebody's cancer today is not the same cancer they had six months ago. It's very unlikely that there will be a single drug that cures all cancers. The cure for cancer is a learning program that takes in the genome of the tumor—the mutations it has—the patient's genome, the patient's medical history, and other relevant information, and based on that predicts: this is the drug you need for your cancer. Or the combination or sequence of drugs over time, or a new drug designed specifically for that patient's case. Again, today's learning algorithms are not able to do this. We need to model what the cell is doing and learn the parameters, we need to reason by analogy, and so on and so forth. And once we have the algorithm, we also need patients to share their data, because it's from that data that we will learn. There is a large effort, led by people in Santa Cruz and others, to collect this patient data in order to learn from it. Basically, what they believe is that if patients share their data, then at some point we will be able to cure most cancers—and if they don't, we probably won't. So we need both the learning algorithms and the data from the patients.

Let me mention one last thing—in some ways the one most likely to affect you in your everyday life. These days you have systems that recommend music based on what you've listened to, Netflix recommends movies based on what you've watched, Amazon recommends products based on what you've bought, and there are any number of these things. What all these companies would like to have, and what they are trying to build right now, is a complete, 360-degree picture of you: who you are, what your tastes are, what you do—so that at every stage of your life, for every decision you have to make, whether it's picking a movie to see or a house to buy, this model of you, if you will, can make predictions and help you go from a million options to just a few worth looking at. Again, to build that kind of model we need more powerful learning algorithms than we have today, but once you have it, it will be more indispensable to you than your smartphone is today. That model will basically help you do away with the information overload we have now. Your model will interact with my model to figure out things we might do together. You press the "find me a job" button, and your model interviews for all the possible jobs, interacting with the companies' models, and comes back and says: here are the ten most promising jobs. You say you want to go on a date, and your model goes on dates with a million other people's models and comes back and says: here are the people who are the most promising dates for you. Machine learning has hardly begun to have the impact it is going to have, but using both the data that is being produced and, hopefully, better learning algorithms, we should be able to accomplish a lot of these things. Thank you. [applause]

There are microphones on either side, so please go to a microphone if you have a question. Please keep your questions in the form of a question, and concise, so we can get in as many as we possibly can.

Would you share your thoughts on compositionality and the master algorithm? Compositionality meaning what, exactly?
Compositionality in the sense of first-order logic.

The symbolists believe that compositionality is essential: you need to decompose your problems, solve each piece, and combine the pieces back together. Among the tribes, theirs is the one paradigm that handles that well; the others, not so much. And it's not just for things like problem solving—it's for things like understanding language. Language is very compositional; the reason language is so powerful is that I can combine a finite number of words into an infinite number of sentences. So you will probably like the symbolists.

Some people, like Elon Musk and Stephen Hawking, have worried about progress in artificial intelligence, while another camp can't wait for it. I wondered if you have an opinion on what we need to do—whether there needs to be regulation—or whether you fall into one of those camps.

This is very much a prominent question; it's literally all over the media, with utopians on one side and worriers like Elon Musk on the other. I think the scenario that people like Elon Musk and Stephen Hawking worry about—machines taking over—is very far-fetched; I don't know many experts who take it seriously. The reason people have those thoughts, and it's natural, is that we confuse being intelligent with being human. They think that if we make a superhuman intelligence, it will have the same desires and drives that we do, and that it will squash us. But the truth is that machines could be infinitely intelligent and still not a threat, provided they only get to solve the problems we give them: we set the goals, and they figure out how to achieve them. Take curing cancer—wouldn't it be good to have an intelligent program working on cancer? Provided we don't do anything dumb—and humans are not above doing completely dumb things—and provided we have safeguards, we will be fine. This is one of the reasons everybody needs a rough understanding of machine learning: as with every technology, it has the potential for good and bad, and the people who master it will be the ones the technology serves. All these companies are building models that serve you, but partly those models serve the companies' goals, and that is a little bit dangerous. You want to take control, make sure you are part of the conversation, and make sure the models are doing what you want them to do.

There are other dangers, like jobs. There are a lot of jobs that can be automated, and more of them will be. We all need to think about: how does my job relate to machine learning? Is it something that machines could do easily? If it is, we need to start doing the things that machines cannot do. One of the things machines cannot do is common sense—machines don't have common sense, and they won't have it for a long time—and jobs that require integrating information from many different sources and understanding the context. We need to be aware of that. I don't think that in the short term all the jobs will disappear—right now unemployment is around five percent—but people need to be able to retrain themselves for the new kinds of jobs and leave behind the ones the machines take over. I don't think this is going to happen soon, but if we do get to the point where computers and robots can do everything better than people, then there will be no more jobs—and that could be a good thing, because people just won't have to work; they will find meaning in life in other ways, and the machines will produce the things we need. But then there are questions like: how is that wealth to be shared? If it's going to be a thousand billionaires and everybody else starving, that won't be such a happy outcome.
That is why, in a democracy, our vote is very important, and why we need to be aware of these things.