Hello there. This is Al Jazeera and these are the top stories. Israel has carried out a wave of air strikes on Gaza, and armed fighters have responded, provoking fears of a full-scale conflict. Islamic Jihad says it has fired rockets at Israeli cities, including Tel Aviv. Israel says its Iron Dome missile system has intercepted them. Air raid sirens have been heard in southern and central Israel. The first Israeli strike hit a high-rise tower in Gaza City. One of those killed was Tayseer al-Jabari, a commander of the Al-Quds Brigades, the military wing of Islamic Jihad. At least 10 others have also died, including a five-year-old girl. Dozens of others have been injured in the strikes. "What is this child's fault? She was dreaming of going to kindergarten and asked her father for a school bag and clothes. What has she done wrong, this innocent child?" Taiwan's defense ministry says troops have fired flares to warn off drones and other unidentified aircraft flying over two outlying islands. Meanwhile, China's foreign minister Wang Yi has defended the military drills in the Taiwan Strait, saying that they are in line with international law. Beijing launched those exercises after US House Speaker Nancy Pelosi visited the island. Indiana has become the first US state to ban abortion since the Supreme Court overturned the landmark Roe v Wade ruling. The ban comes into effect on September 15th with some exceptions: women in Indiana can only get an abortion in cases of rape and incest, or when the mother's health is at risk. Abortion was previously allowed up to 20 weeks into pregnancy. A jury in Texas has ordered conspiracy theorist Alex Jones to pay $45.2m for falsely claiming the Sandy Hook school shooting was a hoax. The Infowars host has also been ordered to pay more than $4m
in compensation to the parents of a child killed in that shooting in 2012. Jesse Lewis's parents said comments by Jones led to years of emotional distress and harassment, including death threats. Those are the headlines. Next, it's Talk to Al Jazeera. The history of humanity has been marked by its technological advances: the discovery of fire two million years ago, the invention of the wheel in 3500 BC, all the way to the Industrial Revolution in the 18th century. Throughout the ages we have sought to make our lives easier, though many argue some of those advancements have proven destructive. In modern times, our ambition for a better life has taken us to the age of information technology, programming and artificial intelligence. AI gives machines the ability to do more. They can think for themselves, learn our preferences and behaviors, communicate with us, suggest alternatives, and even do things only humans once did. AI has slowly become an essential part of our lives. Its use in social media has brought us closer to our families and friends, and it's proven valuable at home and at work. But some say there's another, more sinister side to artificial intelligence. Ethiopian-born American computer scientist Timnit Gebru has been one of the most critical voices against the unethical use of AI. She's been vocal about issues around bias, inclusion, diversity, fairness and responsibility
within the digital space. Google asked her to co-lead its unit focused on ethical artificial intelligence, but six weeks later the tech giant fired her after she criticized the company's lucrative AI work. Gebru, considered one of the 100 most influential people of 2022 by Time magazine, has now launched an independent research institute focused on the harms of AI on marginalized groups. So who's behind artificial intelligence technology, and whose interests does it serve, with such a big influence on our lives? Computer scientist Timnit Gebru talks to Al Jazeera. Timnit Gebru, thank you for talking to Al Jazeera. So, to start right at the beginning and just set the scene a little for people who might not think about AI in everyday life: as the technology stands right now, how much are we using AI in day-to-day life? How embedded is it right now for most people? I don't blame the public for being confused about what it is. I think that many researchers like myself, who have gotten our PhDs in AI, who have been studying AI, are also confused, because the conception of AI that we see in pop culture is, in my view, what really shapes public opinion about what it is. And so it kind of makes me realize that pop culture has this huge power to shape people's thinking. So I think when people think of AI, they're thinking about Terminator kind of things: these robots that are kind of human-like and are going to destroy the world, or are either good or bad, something like that. But that's really not what is being branded as, quote unquote, AI right now. Anything you can think of that processes any sort of data and makes any sort of predictions about people is branded as AI. What Shoshana Zuboff calls surveillance
capitalism is based on what is currently being branded as AI. Any sort of chatbot that you use, for instance, whether it is Alexa or Siri (I guess these are voice assistants), or chatbots that a lot of companies use because they want to hire fewer call center operators or things like that; there could be some sort of machine learning behind it. There is a lot of surveillance in day-to-day life, whether it is face recognition or other kinds of tracking, and that has some sort of AI in it. There are recommendation engines that we might not even know exist when we're watching videos on TikTok or something like that, or the targeted advertisements that we get, or music selections that try to infer what we want to listen to next based on what we listened to before. So it's a very broad kind of branding, and it wasn't always the case, but I think there's always language that people use in order to sell more products or hype up their products, in my opinion. So that, in my view, is what is currently being branded as artificial intelligence. That's really interesting, because I guess when you think about using even face recognition, or getting a playlist recommended to you, as you say, I don't think about that being AI; I'm just opening my phone. Are people thinking about it as they use it, or is this just going under the radar as just the future of what it means to use technology? It's very interesting because, in my opinion, there is this deliberate rebranding of artificial intelligence going on, so that people are confused by the capabilities of the systems that are being billed as, quote unquote, artificial intelligence.
So for instance, we even see these papers that say that computers can now identify skin cancer with superhuman performance, that they're better than humans at doing this, which is really not true. So scientists themselves are engaging in this kind of hype, and corporations themselves are engaging in this kind of hype. And what that does is, instead of people thinking about a product that is created by human beings, whether in corporations or government agencies or military agencies like defense contractors creating autonomous weapons and drones, instead of thinking about people creating artifacts that we are then using, we think about AI, quote unquote, as this being that has its own agency. So what we do then is we don't ascribe the issues that we see to the people or corporations that are creating harmful products. We start derailing the conversation and talking about whether you can create a moral being, or whether you can impart your values into AI, or whatever, because now we are ascribing this responsibility away from the creators of these artifacts, these machines, to some sort of being that we are telling people has its own agency. OK, so is that what got you into your line of work, the ethics of artificial intelligence? Because it hasn't always been an easy path. Initially, I was just interested in the technical details. Face recognition is something that is done under the computer vision umbrella, or any other kind of thing that tries to make sense of images. It seemed really cool that you could infer certain things based on videos and images, and that was what I was interested in. However, there was a confluence of a number of things.
First of all, when I went to graduate school at Stanford, I saw this stark lack of any Black person from anywhere in the world in graduate school, especially with respect to artificial intelligence, developing or researching these systems. When I was at Stanford, I heard that they had literally only graduated one Black person with a PhD in computer science since the inception of the computer science department. And you can imagine the type of influence that this school has had on the world; you can imagine the kinds of companies that came out of there, including Google. That was just such a shock to me. So I saw not only the lack of Black people in the field, but also the lack of understanding of what we go through, what systems of discrimination we go through, in the US and globally. Around the same time, I also started reading about systems that were being sold to the public and being used in very scary ways. For instance, there was a ProPublica article that showed there was a company whose software purported to determine the likelihood of people committing a crime again, and unsurprisingly, of course, it was heavily biased against Black people. Over time, you know, I saw all kinds of systems purporting to determine whether somebody is a terrorist or not, et cetera. And my own life experience told me who would be most likely to be harmed by those systems, and who would be most likely to be developing those kinds of systems. So that was the time when I started pivoting from purely studying how to develop these systems and doing research on the technical aspects of this field,
to being very worried about the way in which these systems are being developed and who they are negatively impacting. Learning about the existence of an algorithm, a model that purports to predict someone's likelihood of committing a crime again, was such a huge shock for me, and by then it had existed for a long time. And in addition to that, this is a system judges used for sentencing, for setting bail, along with other inputs. And there are other predictive policing systems. One example of a predictive policing system was something called PredPol, which the LA police, the LAPD, were using, and thanks to a lot of activism from groups like Stop LAPD Spying, this software stopped being used by the LAPD. Actually, people in my field, the statistician Kristian Lum and the scientist William Isaac, did a study that reverse-engineered PredPol and showed that, unsurprisingly, it pinpoints neighborhoods with Black and brown people and says that these neighborhoods are hot spots. Drug use is one example: if you look at the national survey, drug use is pretty evenly distributed in, for instance, Oakland. But predictive policing systems like PredPol instead pinpoint Black and brown neighborhoods, saying that these are hot spots. And why is that? Well, given the history and the current reality of the US, we're not surprised by that, because these systems are fed training data that are labeled, and the training data does not depend on who commits a crime; it depends on who was arrested for committing a crime. And obviously that's going to be biased. I want to come back to the issues around the data that you put into AI, and the results that you get, in a minute. But let's go back to when you were hired at Google. What was it that you were hired to do?
I was hired to do the kind of work that I'm talking about, with respect to analyzing the negative societal impacts of AI and working on all aspects of mitigating that, whether it is technical or non-technical or documentation. So I was a research scientist with, you know, the freedom to set my own research agenda. And I was also co-lead of the Ethical AI team with my former co-lead, Margaret Mitchell. And so our job there was also to set the agenda of our small research team, which again focused on minimizing the negative societal impact of AI. And as you say, there's a lack of diversity in the industry; you've known that since you got into this. So what was the reality of going into this mega, huge company as a woman of color trying to do that job? It was incredibly difficult from day one. I faced a lot of discrimination, whether it's sexism or racism, from day one. I tried to raise the issues, but it was exhausting. My colleague Mitchell and I were just so exhausted, and doing research, basically working on our papers and discussing ideas, felt like such a luxury, because we were always so exhausted by all the other issues that we were dealing with. You eventually put out a paper which led to your being dismissed, or Google says you resigned, but put that to the side. This paper looks at the biases being built into AI machines, basically reflecting the mistakes that humanity has made. Is perpetuating history a foregone conclusion when we talk about AI, or is there another path? I have always believed that we have to believe that there is another path. And this comes back to the way in which we discuss AI as just being its own thing, rather than an artifact, a tool that is created by human beings, whether in corporations or educational facilities or other institutions.
We have to know that we control what we build, who we build it for and what it is used for. So there is definitely another path. But for that other path to exist, we have to uncover the issues with the current path that we're on and remedy them, and also invest, in terms of research, in those other paths. So for instance, this paper that I put out, called On the Dangers of Stochastic Parrots, talks about this huge race that is going on right now to develop what are called large language models. These models are trained on massive amounts of data straight from the internet. And you and I are not getting paid for the content that we put out on the internet that is being scraped to train these models. Just to make it really simple, something I hadn't even considered in what you're talking about is, you know, giving AI all of the information of the internet. Of course it's going to spew out some of the worst parts of the internet, which are often predominant. But if we give it a smaller dataset, or if we curate the data, then we're going to get something that might be more helpful for people. Is that putting it too simply? Actually, we discussed so many different kinds of issues in our paper, and one of the issues we discussed is exactly what you mentioned, in terms of curating data, and using large uncurated data from the internet with the assumption that size gives you diversity. And what we say is size does not give you diversity, and we detail so many ways in which that's not the case. And one of the suggestions that we make is to make sure that we curate our data and we understand our data.
And if we believe that the dataset we're using to train these models is too large, too daunting, too overwhelming for us to understand, document and curate, then that means we shouldn't be using that data. So this is one of the things that we're talking about. Another thing that I thought was really fascinating, that I guess we don't consider in our daily lives, is that at a macro level, the funding for a lot of the technological advances that trickle down to us begins either with the military or with these massive tech giants. Can they have our best interests at heart? This is precisely what I talk about, too, with the founding of our newly named institute, DAIR, for distributed AI research. When you look at the history of things like machine translation or self-driving cars... Self-driving cars are a good example: they