
So that is aljazeera.com, and watch us live by clicking on the live icon.

One of the top stories on Al Jazeera: after the deadliest escalation in violence since last year's war in Gaza, a ceasefire between Israel and the Palestinian group Islamic Jihad has been reached. It is due to come into force in an hour's time, at 20:30 GMT. Forty-one Palestinians, including 15 children, have been killed since the fighting began on Friday, after Israel launched what it described as preemptive strikes on Gaza. It claimed it was only targeting members of Islamic Jihad. The group has retaliated with hundreds of rockets fired across the border, but most have been intercepted. Natasha Ghoneim has more from West Jerusalem.

"There of course was a lingering concern that if the operation continued and the death toll rose in Gaza, that perhaps Hamas might be tempted to enter the fray. That is not something that Israel's government has said that it wanted. They have said from the beginning that this operation was to target Islamic Jihad, to thwart a planned attack against Israel, and that there was no appetite for a larger, protracted battle."

Ukraine has accused Russia of again shelling Europe's largest nuclear power plant, in Zaporizhzhia. Kyiv says Russian forces damaged three radiation sensors on Saturday night and that one worker was injured. The Zaporizhzhia plant was captured by Russian forces in March; it is still run by Ukrainian technicians. The UN atomic watchdog has warned military activity at the site could cause a nuclear disaster.

The first grain ship to leave Ukraine since Russia invaded has experienced a delay and has not docked in Lebanon as planned. The Razoni left Odesa last Monday carrying tens of thousands of tonnes of corn. It was due in Tripoli by Sunday, and it's not clear why it hasn't arrived. It comes as four more cargo ships sail out of Ukrainian ports on the Black Sea.

Firefighting teams from Mexico and Venezuela are helping Cuba battle flames at a major oil storage facility. Oil tanks at the facility in Matanzas have been burning since it was struck by lightning on Friday. At least one Cuban firefighter has died.

Next, it's Talk to Al Jazeera. I'll have more news for you straight after that. Thanks for watching. Bye for now.

The history of humanity has been marked by its technological advances: the discovery of fire two million years ago, the invention of the wheel in 3500 BC, all the way to the Industrial Revolution in the 18th century. Throughout the ages we've sought to make our lives easier, though many argue some of those advancements have proven destructive. In modern times, our ambition for a better life has taken us to the age of information technology, programming and artificial intelligence. AI gives machines the ability to do more. They can think for themselves, learn our preferences and behaviors, communicate with us, suggest alternatives, and even do things only humans once did. "Alexa, order..." AI has slowly become an essential part of our lives. Its use in social media has brought us closer to our families and friends, and it's proven valuable at home and at work. But some say there's another, more sinister side to artificial intelligence. Ethiopian-American computer scientist Timnit Gebru has been one of the most critical voices against the unethical use of AI. She's been vocal about issues around bias, inclusion, diversity, fairness and responsibility.
Within the digital space, Google asked her to co-lead its unit focused on ethical artificial intelligence, but the tech giant later fired her after she criticized the company's lucrative AI work. Gebru, considered one of the 100 most influential people of 2022 by Time magazine, now heads an independent research institute focused on the harms of AI on marginalized groups. So who is behind artificial intelligence technology, and whose interests does it serve? And with such a big influence on our lives, how democratic is its use? Computer scientist Timnit Gebru talks to Al Jazeera.

Timnit Gebru, thank you for talking to Al Jazeera. So to start with, let's go right to the start and just set the scene a little bit for people who might not think about AI in everyday life. As the technology stands right now, how much are we using AI in day-to-day life? How embedded is it right now for most people?

I don't blame the public for being confused about what AI is. I think that many researchers like myself, who have gotten our PhDs in AI, who have been studying AI, are also confused, because the conception of AI that we see in pop culture is, in my view, what really, really shapes public opinion about what it is. And so it kind of makes me realize that pop culture has this huge power to shape people's thinking, right? So I think when people think of AI, they are thinking about Terminator kinds of things: these robots that are kind of human-like and are going to destroy the world, or are, you know, either good or bad, something like that. But that's really not what is being branded as, quote unquote, AI right now. Anything you can think of that processes any sort of data and makes any sort of predictions about people, what Shoshana Zuboff calls surveillance capitalism, is based on what is currently being branded as AI. Any sort of chatbot that you use, whether it is Alexa or Siri (I guess these are voice assistants), or chatbots that a lot of companies use because they want to hire fewer call center operators, or things like that: there can be some sort of quote-unquote AI behind it. There is a lot of surveillance in day-to-day life, whether it is face recognition or other kinds of tracking that go on, and that has some sort of AI in it. There are recommendation engines that we might not even know exist when we're watching, you know, videos on TikTok or something like that, or targeted advertisements that we get, or music selections that try to infer what kind of music we want to listen to next based on what we listened to before. So it's a very broad kind of branding, and it wasn't always the case. But I think that, you know, there's always the language that people use in order to kind of sell more products or hype up many of their products, in my opinion. So that is currently, in my view, what is being branded as artificial intelligence.

That's really interesting, because I guess when you think about using, like, even face recognition, or getting a playlist recommended to you, as you say, I mean, I don't think about that being AI; I'm just, like, opening my phone. I guess that's something, you know: are people thinking about it as they use it, or is this just, I guess, going under the radar as just the future, or what it means to use technology?
It's very interesting because, in my opinion, there is this deliberate rebranding of artificial intelligence going on, so that people are confused by the capabilities of the systems that are being billed as, quote unquote, artificial intelligence. Right? So for instance, we even see these papers that say that computers can now identify skin cancer with superhuman performance, that they're better than humans at doing this, which is really not true. Right? So scientists themselves are engaging in this kind of hype, and corporations themselves are engaging in this kind of hype. And what that does is, instead of people thinking about a product that is created by human beings, whether they're in corporations or government agencies or military agencies like defense contractors, right, creating autonomous weapons and drones; instead of thinking about people creating artifacts that we are then using, we think about quote unquote AI as this, you know, being that has its own agency. So what we do then is we don't ascribe the issues that we see to the people or corporations that are creating harmful products. We start derailing the conversation, talking about whether you can create a moral being, or whether you can impart your values into AI, or whatever. Because now we are kind of ascribing this responsibility away from the creators of these artifacts, like machines, right, to some sort of, you know, being that we are telling people has its own agency.

OK. So is that what got you into your line of work, the ethics of artificial intelligence? Because it hasn't always been an easy path.

Initially I was just interested in the technical details. Face recognition is something that is done under the computer vision umbrella, or any other kind of thing that tries to make sense of images. That seemed really cool, that you could infer certain things based on videos and images, and that was what I was interested in. However, there was a confluence of a number of things. So first of all, when I went to graduate school at Stanford, I saw this stark lack of any Black person from anywhere in the world in graduate school, and especially with respect to artificial intelligence, developing or researching the systems. So when I was at Stanford, I heard that they had literally only graduated one Black person with a PhD in computer science ever since the inception of the computer science department. And you can imagine the type of influence that this school has had on the world, right? You can imagine the kinds of companies that came out of there, including Google. So that was just such a shock to me. So I saw not only the lack of Black people in the field, but also the lack of understanding of kind of what we go through, and what systems of discrimination we go through, in the US and globally, really. Around the same time, I also started reading about systems that were being sold to the public and being used in very, very kind of scary ways. So for instance, there was a ProPublica article that showed that there was a company that purported to determine the likelihood of people committing a crime again, and unsurprisingly, of course, it was heavily biased against Black people. At the same time, you know, I see these kinds of systems purporting to determine whether somebody is a terrorist or not, etc. And my own life experience told me, you know, who would be most likely to be harmed by those systems.
And who would be most likely to be developing those kinds of systems? Right? So that was the time when I started pivoting from purely studying how to develop the systems and doing research on the technical aspects of the field to being very worried about the way in which the systems are being developed and who they are negatively impacting. Learning about the existence of an algorithmic model that purports to predict someone's likelihood of committing a crime again was such a huge shock for me, and by then it had existed for a long time. And in addition to that, you know, this system judges use for sentencing, for setting bail, along with other inputs. And there are other systems, other predictive policing systems. So one example of a predictive policing system was something called PredPol, which the LA police, the LAPD, were using; thanks to a lot of activism from groups like Stop LAPD Spying, this software stopped being used by the LAPD. Actually, people in my field, the statistician Kristian Lum and the scientist William Isaac, did a study that reverse-engineered PredPol and showed that, unsurprisingly, it pinpoints neighborhoods with Black and brown people and says that these neighborhoods are crime hot spots. Right? Drug use is one example. If you look at the national survey, drug use is pretty evenly distributed in, for instance, Oakland. Right? But the predictive policing systems like PredPol instead kind of pinpoint Black and brown neighborhoods, saying that these are hot spots. And why is that? Well, with, you know, the history and the current realities of the US, we're not surprised by that, because these systems are fed training data that are labeled, and the training data does not depend on who commits a crime. It depends on who was arrested for committing a crime. And obviously that's going to be biased.

I want to come back to the issues around the data that you put into AI, and the results that you get, in a minute. But let's go back to when you were hired at Google. What was it that you were hired to do?

I was hired to do the kind of work that I'm talking about, with respect to analyzing the negative societal impacts of AI and working on all aspects of mitigating that, whether it is technical or non-technical or documentation. So I was a research scientist with, you know, the freedom to set my own research agenda, and I was also co-lead of the Ethical AI team with my former co-lead, Meg Mitchell. And so our job there was also to create the agenda of our small research team, which is, again, focused on minimizing the negative societal impact of AI.

And as you say, there's a lack of diversity in the industry. You knew that, you know, since you got into this. So what were the realities of going into this mega, huge company as a woman of color, trying to do that job?

It was incredibly difficult from day one. I faced a lot of discrimination, whether it's sexism or microaggressions, from day one. I tried to raise the issues, but it was exhausting. You know, my colleague Mitchell and I were just so exhausted, and doing research, basically, you know, working on our papers and discussing ideas, felt like such a luxury, because we were just always so exhausted by all the other issues that we were dealing with.

You eventually put out a paper which led to your being dismissed, but Google says you resigned.
But put that to the side. This paper looks at the biases being built into AI machines, basically reflecting the mistakes that humanity has made. Is perpetuating history a foregone conclusion when we talk about AI, or is there another path?

I always believe that we have to believe that there is another path. And this comes back to the way in which we discuss AI as just being its own thing, rather than an artifact, a tool that is created by human beings, whether in corporations or in educational facilities or other institutions, right? So we have to know that we control what we build, who we build it for, and what it's used for. So there is definitely another path. But for that other path to exist, we have to uncover the issues with the current path that we're on and remedy them, and also invest, in terms of research, in those other paths. So for instance, this paper that I put out, called On the Dangers of Stochastic Parrots, talks about this huge race that is going on right now in developing what are called large language models. And these models are trained on massive amounts of data scraped straight from the internet, right? And so you and I are not getting paid for the content that we put on the internet, which is being scraped to train these models.

Just to make it really simple, I mean, something that I hadn't even considered: what you're talking about is, you know, giving AI all of the information of the internet, and of course it's going to, you know, spew out some of the worst parts of the internet, which are, you know, often predominant. But if we give it a smaller dataset, or if we curate the data, then we're going to get something that might be more helpful for people. Is that kind of putting it too simply?

Actually, that is one of, you know, we discussed so many different kinds of issues in our paper, and one of the issues we discussed is exactly what you mentioned, in terms of curating data, versus using, you know, large uncurated data from the internet with the assumption that size gives you diversity, right? And what we say is, size does not give you diversity, and we detail so many ways in which that's not the case. And one of the suggestions that we make is to make sure that we curate our data and we understand our data. And we believe that if the dataset that we're using to train these models is too large, too daunting, too overwhelming for us to understand it, document it and curate it, then that means we shouldn't be using that data. Right? And so this is kind of one of the things that we're talking about.

Another thing that I thought was really fascinating, that I guess we don't consider in our daily lives, is that at a macro level, the funding for a lot of the technological advances that trickle down to us begins either with the military or with these massive tech giants. Can they, you know, have our best interests at heart?

This is precisely what I talk about, too, with the founding of our new institute, DAIR, the Distributed AI Research Institute, right? When you look at the history of things like machine translation or self-driving cars, right, self-driving cars are a good example: they were very much funded by DARPA, the defense funding agency. Right? So it's not because they're interested in accessibility for disabled people, right? They're interested primarily in autonomous warfare.
So how can we assume that something that starts with that goal and that funding inside it will end up being something different? And so I often give this example. You know, when people talk about AI for social good, right, they talk about kind of reorienting some of these things that we already built toward quote-unquote social good. Whereas for me, it's kind of like saying, OK, we built this tank first, and then we try to figure out how to use this thing for something other than warfare. Maybe we can use it for farming, maybe we can use it for something else. But the thing is, we already made the tank, right? We designed the systems so that they became a tank, with a specific goal and outcome, which is warfare. So that's exactly how we've been designing our technological systems. Look at the history of AI: when it's funded by the government, when they fund basic research in this space, and when they have all of these collaborations with large tech companies. When you look at really prestigious top schools like MIT, they're huge military contractors, right? The Lincoln Lab. So I think that as human beings, we have to look at ourselves and say: what are we building, and where are we going, and why are we building this thing? And we can have a different path.

You mentioned governments there. I'm wondering, in your line of work, have you seen governments use AI in a way that, if we all knew about it, we would be upset to find out? Like a government using data for AI in ways that perhaps we might not be aware of?

There are, you know, all sorts of face recognition related uses by law enforcement, for example. And recently my colleague Joy Buolamwini, my co-author and friend, who also heads the Algorithmic Justice League, had a series of videos and op-eds and other educational material describing ID.me. The IRS was asking people to submit basically biometric information in order to log in and file taxes and get all sorts of government services; they were using this private company, ID.me, as a verification mechanism. So then this private company has your biometric information. And this is getting proliferated everywhere. Like, if you look at airports, they are now using all sorts of, you know, face recognition related things to verify that it's you, and I don't even know exactly what they're using and how this is getting proliferated. So every day we're finding out about new uses that we never knew about, that we never voted on, that we were never educated about. The worst example for me is Clearview. So Clearview is this company that has been under fire for scraping so many people's face data from the internet, for example, I believe, Facebook, and training these automated facial analysis tools that are being used by law enforcement and all sorts of groups around the world. And it's been sued for a number of things, right, like, for example, its use of this kind of data. And then you also find out that they have all these partnerships with all sorts of governmental agencies.
And so, because I think people will be watching this at home going, oh my gosh, I did give my biometric information, or, you know, I do get scanned at the airport all the time: for people watching at home who are concerned about how their data is being used, concerned about, you know, how that's being fed into artificial intelligence and, you know, the whole thing, what can people at home do? What do you do? Do you avoid certain things or certain places? Or, you know, I'm just trying to think of some helpful things for people who are watching going, oh no, you know, I need to do something.

In my opinion, you know, the biggest thing we can do is advocate for regulation that puts the onus on companies to make sure that they keep people safe, to make sure that they prove to us, before they put products out there, that they're not harmful. The issue right now is that we're assuming that the onus is on the public, on each one of us. And how many times can you do this, right? Like, every single thing you click on, every single thing you use, you have to make sure that, you know, you have the privacy settings right, etc. And I think, you know, people say privacy by design, right, or we should have something like, I guess, as some researchers say, fairness by design, or something like that, like justice by design, right? Where the onus is on the designers and implementers, and not on the public, who have to go on with their daily lives and make sure that they spend all their days reading every terms of service and making sure that they click on, you know, certain things and not others.

We've talked a lot about, you know, big tech companies, about the vast amounts of data being collected. Is AI inherently anti-democratic?

That's a very interesting question. So there is a segment of people in AI whose goal is to create what's called an artificial general intelligence. What does it mean to create artificial general intelligence? It means that you're trying to create this being that knows everything and can automate any task. So if you're a corporation, then you can have this thing that does all the tasks for you, that you don't have to pay or care about, right? And so that kind of goal I find extremely strange, inherently undemocratic. To me personally, any aspect of AI, I see it as a tool, a tool that we can build for the specific needs of a community. So if you build a tool based on the needs of a specific community: for example, I had talked about, you know, the people at Te Hiku, the Māori radio station in New Zealand, right, using language technology for revitalization of the Māori language. Right? And so that to me is definitely a democratic goal. It is a goal that is allowing people to use their language and culture after it has been beaten out of them because of colonization. So if we decide that that is what we're going to do, we can do that, and we can create our funding structures and design our systems and processes accordingly. So again, you know, I think that there is a world in which we can build AI tools that are beneficial to people. But that means we have to do a rethinking of where the money comes from and how we do our research and development process. However, there is this segment of people in AI who have this weird goal of artificial general intelligence, which I believe is inherently undemocratic.

Timnit Gebru, thank you for talking to Al Jazeera.
Thank you for having me.

Revealing eco-friendly solutions to combat threats to our planet, on Al Jazeera.
