So that's aljazeera.com, and watch us live by clicking on the live icon.

One of the top stories on Al Jazeera: after the deadliest escalation in violence since last year's war on Gaza, a ceasefire between Israel and the Palestinian group Islamic Jihad has been reached and is due to come into force in an hour's time, at 20:30 GMT. Forty-one Palestinians, including 15 children, have been killed since the fighting began on Friday, after Israel launched what it described as preemptive strikes on Gaza. It claimed it was only targeting members of Islamic Jihad. The group has retaliated with hundreds of rockets fired across the border, but most have been intercepted. Natasha Ghoneim has more from West Jerusalem.

There of course was a lingering concern that if the operation continued and the death toll rose in Gaza, then perhaps Hamas might be tempted to enter the fray. That is not something that Israel's government has said it wanted. They have said from the beginning that this operation was to target Islamic Jihad, to forestall a planned attack against Israel, and that there was no appetite for a larger, protracted battle.

Ukraine has accused Russia of again shelling Europe's largest nuclear power plant, in Zaporizhzhia. Kyiv says Russian forces damaged three radiation sensors on Saturday night and that one worker was injured. The Zaporizhzhia plant was captured by Russian forces in March; it is still run by Ukrainian technicians. The UN's atomic watchdog has warned military activity at the site could cause a nuclear disaster.

The first grain ship to leave Ukraine since Russia invaded has experienced a delay and has not docked in Lebanon as planned. The Razoni left Odesa last Monday, carrying tens of thousands of tonnes of corn. It was due in Tripoli by Sunday, and it's not clear why it hasn't arrived. The delay comes as four more cargo ships sail out of Ukrainian ports on the Black Sea.

Firefighting teams from Mexico and Venezuela are helping Cuba battle flames at a major oil storage facility. Oil tanks at the facility in Matanzas have been burning since one was struck by lightning on Friday. At least one Cuban firefighter has died.

Next, it's Talk to Al Jazeera. I'll have more news for you straight after that. Thanks for watching. Bye for now.

The history of humanity has been marked by its technological advances: the discovery of fire two million years ago, the invention of the wheel in 3500 BC, all the way to the Industrial Revolution in the 18th century. Throughout the ages, we've sought to make our lives easier, though many argue some of those advancements have proven destructive. In modern times, our ambition for a better life has taken us to the age of information technology, programming and artificial intelligence. AI gives machines the ability to do more. They can think for themselves, learn our preferences and behaviours, communicate with us, suggest alternatives, and even do things only humans once did. "Alexa, order..." AI has slowly become an essential part of our lives. Its use in social media has brought us closer to our families and friends, and it's proven valuable at home and at work. But some say there's another, more sinister side to artificial intelligence.

American computer scientist Timnit Gebru has been one of the most critical voices against the unethical use of AI. She's been vocal about issues around bias, inclusion, diversity, fairness and responsibility
within the digital space. Google asked her to co-lead its unit focused on ethical artificial intelligence, but the tech giant later fired her after she criticized the company's lucrative AI work. Gebru, considered one of the 100 most influential people of 2022 by Time magazine, has now launched an independent research institute focused on the harms of AI on marginalized groups. So who's behind artificial intelligence technology, and whose interests does it serve? And with such a big influence on our lives, how democratic is its use? Computer scientist Timnit Gebru talks to Al Jazeera.

Timnit Gebru, thank you for talking to Al Jazeera. So to start with, let's set the scene a little bit for people who might not think about AI in everyday life. As the technology stands right now, how much are we using AI in our day-to-day lives? How embedded is it right now for most people?

I don't blame the public for being confused about what it is. I think that many researchers like myself, who have gotten our PhDs in AI, who have been studying AI, are also confused, because the conception of AI that we see in pop culture is, in my view, what really, really shapes public opinion about what it is. And so it kind of makes me realize that pop culture has this huge power to shape people's thinking, right? So I think when people think of AI, they are thinking about Terminator kinds of things: these robots that are kind of human-like and are going to destroy the world, or are either good or bad, something like that. But that's really not what is being branded as, quote unquote, AI right now. Anything you can think of that has any sort of data processed and makes any sort of predictions about people, what Shoshana Zuboff calls surveillance capitalism, is based on what is currently being branded as AI. Any sort of chatbot that you use, whether it is Alexa or Siri, I guess these are voice assistants, or chatbots that a lot of companies use because they want to hire fewer call-center operators or things like that, there can be some sort of machine learning behind it. There is a lot of surveillance in day-to-day life, whether it is face recognition or other kinds of tracking that go on, and that has some sort of AI in it. There are recommendation engines that we use, that we might not even know exist, when we're watching videos on TikTok or something like that, or the targeted advertisements that we get, or the music selections that try to infer what kind of music we want to listen to next based on what we listened to before. So it's a very broad kind of branding, and it wasn't always the case. But, you know, there's always the language du jour that people use in order to kind of sell more products or hype up many of their products, in my opinion. So that is currently, in my view, what is being branded as artificial intelligence.

That's really interesting, because I guess when you think about using even face recognition, or getting a playlist recommended to you, as you say, I don't think about that being AI; I'm just opening my phone. Are people thinking about it as they use it, or is this just going under the radar as just the future, or what it means to use technology?
It's very interesting because, in my opinion, there is this deliberate rebranding of artificial intelligence that's going on, so that people are confused by the capabilities of the systems that are being billed as, quote unquote, artificial intelligence. Right? So for instance, we even see these papers that say that computers can now identify skin cancer with superhuman performance, that they're better than humans at doing this, which is really not true, right? So scientists themselves are engaging in this kind of hype, and corporations themselves are engaging in this kind of hype. And what that does is, instead of people thinking about a product that is created by human beings, whether they're in corporations or government agencies or military agencies, like defense contractors creating autonomous weapons and drones, right, instead of thinking about people creating artifacts that we are then using, we think about quote unquote AI as, you know, some being that has its own agency. So what we do then is, we don't ascribe the issues that we see to the people or corporations that are creating harmful products. We start derailing the conversation, talking about whether you can create a moral being, or whether you can impart your values into AI, or whatever, because now we are ascribing this responsibility away from the creators of these artifacts, these machines, right, to some sort of being that we are telling people has its own agency.

OK. So what got you into your line of work, the ethics of artificial intelligence? Because it hasn't always been an easy path, it seems.

Initially I was just interested in the technical details. Face recognition is something that is done under the computer vision umbrella, or any other kind of thing that tries to make sense of images. It seemed really cool that you could infer certain things based on videos and images, and that was what I was interested in. However, there was a confluence of a number of things. So first of all, when I went to graduate school at Stanford, I saw this stark lack of any Black person, from anywhere in the world, in graduate school, and especially with respect to artificial intelligence, developing or researching the systems. When I was at Stanford, I heard that they had literally only graduated one Black person with a PhD in computer science, ever, since the inception of the computer science department. And you can imagine the type of influence that this school has had on the world, right? You can imagine the kinds of companies that came out of there, including Google. So that was just such a shock to me. I saw not only the lack of Black people in the field, but also the lack of understanding of what we go through, and what systems of discrimination we go through, in the US and globally, really. Around the same time, I also started reading about systems that were being sold to the public and being used in very, very scary ways. So for instance, there was a ProPublica article that showed that there was a company that purported to determine the likelihood of people committing a crime again, and unsurprisingly, of course, it was heavily biased against Black people. At the same time, you know, I see these kinds of systems purporting to determine whether somebody's a terrorist or not, et cetera. And my own life experience told me, you know, who would be most likely to be harmed by those systems.
And who would be most likely to be developing those kinds of systems, right? So that was the time when I started pivoting from purely studying how to develop the systems, and doing research on the technical aspects of the field, to being very worried about the way in which the systems are being developed and who they are negatively impacting. Learning about the existence of an algorithmic model that purports to predict someone's likelihood of committing a crime again was such a huge shock for me, and by then it had existed for a long time. And in addition to that, you know, this is a system judges used for sentencing, for setting bail, along with other inputs. And there are other systems, other predictive policing systems. One example of a predictive policing system was something called PredPol, which the LA police, the LAPD, were using, and thanks to a lot of activism from groups like Stop LAPD Spying, this software stopped being used by the LAPD. Actually, people in my field, the statistician Kristian Lum and the scientist William Isaac, did a study that reverse-engineered PredPol and showed that, unsurprisingly, it pinpoints neighborhoods with Black and brown people and says that these neighborhoods are crime hot spots, right? Drug use is one example: if you look at the National Survey on Drug Use, drug use is pretty evenly distributed in, for instance, Oakland, right? But the predictive policing systems like PredPol, they instead pinpoint Black and brown neighborhoods, saying that these are hot spots. And why is that? Well, given the long history and the current realities of the US, we're not surprised by that, because these systems are fed training data that are labeled, and the training data does not depend on who commits a crime; it depends on who was arrested for committing a crime. And obviously that's going to be biased.

I want to come back to the issues around the data that you put into AI, and the results that you get, in a minute. But let's go back to when you were hired at Google. What was it that you were hired to do?

I was hired to do the kind of work that I'm talking about, with respect to analyzing the negative societal impacts of AI and working on all aspects of mitigating that, whether it is technical or non-technical or documentation. So I was a research scientist with, you know, the freedom to set my own research agenda, and I was also co-lead of the Ethical AI team with my former co-lead, Meg Mitchell. And so our job there was also to create that agenda for our small research team, which is, again, focused on minimizing the negative societal impact of AI.

And as you say, there's a lack of diversity in the industry. You've known that since you got into this. So what were the realities of going into this mega, huge company, as a woman of color, trying to do that job?

It was incredibly difficult from day one. I faced a lot of discrimination, whether it's sexist remarks or microaggressions, from day one. I tried to raise the issues, but it was exhausting. You know, my colleague Mitchell and I were just so exhausted, and doing research, basically, you know, working on our papers and discussing ideas, felt like such a luxury, because we were just always so exhausted by all the other issues that we were dealing with.

You eventually put out a paper which led to your being dismissed, but Google says you resigned.
But put that to the side. This paper looks at the biases being built into AI machines, basically reflecting the mistakes that humanity has made. Is perpetuating history a foregone conclusion when we talk about AI, or is there another path?

I always believe that we have to believe that there is another path. And this comes back to the way in which we discuss AI as just being its own thing, rather than an artifact, a tool, that is created by human beings, whether in corporations or educational facilities or other institutions, right? We have to know that we control what we build, and who we build it for, and what it's used for. So there is definitely another path. But for that other path to exist, we have to uncover the issues with the current path that we're on, and remedy them, and also invest, in terms of research, in those other paths. So for instance, this paper that I put out, called On the Dangers of Stochastic Parrots, talks about this huge race that is going on right now in developing what are called large language models. And these models are trained on massive amounts of data from the internet, scraped from the internet, right? And so you and I are not getting paid for the content that we put on the internet that is being scraped to train these models.

Just to make it really simple, I mean, something that I hadn't even considered, what you're talking about is, you know, giving AI all of the information of the internet, and of course it's going to, you know, spew out some of the worst parts of the internet, which are, you know, often predominant. But if we give it a smaller dataset, or if we curate the data, then we're going to get something that might be more helpful for people. Is that putting it too simply?

Actually, that is one of, you know, we discuss so many different kinds of issues in our paper, and one of the issues we discuss is exactly what you mentioned, in terms of curating data, and using, you know, large, uncurated data from the internet with the assumption that size gives you diversity, right? And what we say is, size does not give you diversity, and we detail so many ways in which that's not the case. And one of the suggestions that we make is to make sure that we curate our data and we understand our data. And we believe that if the dataset that we're using to train these models is too large, too daunting, too overwhelming for us to understand it, document it and curate it, then that means we shouldn't be using that data, right? So this is kind of one of the things that we're talking about.

Another thing that I thought was really fascinating, that I guess we don't consider in our daily lives, is that at a macro level, the funding for a lot of the technological advances that trickle down to us begins either with the military or with these massive tech giants that, you know... can they have our best interests at heart?

This is precisely what I talk about, too, with the founding of our new institute, the Distributed AI Research Institute, right? When you look at the history of things like machine translation or self-driving cars, right, so self-driving cars are a good example, they were very much funded by DARPA, a defense funding agency, right? So it's not because they're interested in accessibility for disabled people, right? They're interested primarily in autonomous warfare.
So how can we assume that something that starts with that goal and that funding inside it will end up being something different? And so I often give this example, you know, when people talk about AI for social good, right...