comparemela.com

Comparing his rose mass, killing of palestinians in garza. So the whole of course drew an Immediate Reaction from israels prime minister. Benjamin netanyahu president silly by his grace the memory of 6000000 jews worded by the nazis and his demonize, the jew estates like the most virulent anti semites, he should be ashamed of himself. And on monday, israels foreign minister summoned brazils ambassador to the countrys Holocaust Memorial for a public pointed dressing down, culminating in this will settle model and way as i asked to convey to president lulu would not forget, no, forgive, and i declare him my name and the name of the citizens of israel as persona non grata and his royal im as long as he does not apologize to you and take it back. The president lula is not backing down here at cooper as it was, and busted it to israel for consultations and some of these radium baskets presume for an explanation as this diplomatic dispute continues. John home and ill just say to the widow, a former haitian to president , jovan elmore say is one of 50 people charged in connection with his assassination. Nothing more say is accused of conspiring with full of prominence to call joseph to kill her husband. The former chief of police has also been indicted with i was killed in his time in the capital of port or prince in 2021. When i saw from 8 till mccray, you can find much more information on our website. That is l just sarah, dont com. Studio b v i series is coming up. Next i will be back at the top of the hour with more news to stay with us. The, the President Biden says once a 2 state solution for palestinians and israelis, what does anybody believe its doable . What this is real for . Im gonna say it back to us foreign policy. And what are the long term consequences for the region and the world . A quizzical look at us politics, the bottom line. The Artificial Intelligence a is already transforming our societies and economies, creating jobs and groups. But are we ready for the dark side . How its consolidating, power, weakening nation states, corrupting our information, ecosystem, and destroying democracy. The, my name is maria risk, and in this episode of studio be on Artificial Intelligence. Well be hearing from to inspiring women, courageous whistle blowers and their own rights working to make a i favor safer, more responsible. Camille francois is a researcher experience in combating this information and digital harms. Today, shes helping lead french of president microns Landmark Initiative on a i and democracy marriages whitaker, blues of woodseau, inside the industry about a i largely unchecked car. And let the google walk out in 2019. Now with a shes president of signal secure messaging. So, how do we protect ourselves from mask this information and distinguish between whats real and st online . I am reading surveillance into our lives leading our new discovery. And how do we make the big tex account, the meredith . Im so happy to be here with you tonight. So youre the president of cigna all my favorite way to send messages these days in your, in a scholar in your own right. You co founded in a Research Organization called a i now which you continue to advise. And i actually have the great pleasure of sharing an office with you when were both colleagues at google. Yes. Whos 10 years ago . Well, um, 2 years ago, back then you were already interested in Machine Learning and its impact on society. Yeah, i mean, i remember these formative conversations talking with you talking with a, you know, thats a concern, whispered network about what is going on. Why is this, you know, at that point more on proven still out, improve in technology and being infiltrated into so many products and services at google. Why is everyone being incentivized to develop Machine Learning it . What actually is this and why are we trusting such significant determinations . About our lives and our institutions to systems that are trained on data that we know to be faulty, that we know to be biased, but we know not to reflect in the context or the criteria in which these systems are used. So that was the beginning of what i think at the time we call the Machine Learning fairness conversations. Yes. Over and across the company. Well that was i think it was around 2014. And so i think the button years ago it was. Huh. Uh, looking great um, i think that is an important date because we can, is zoom out on the conversation on Artificial Intelligence, which sort of touches everything and nothing at once in our current context. And we can actually recognize that Artificial Intelligence as a term is over 70 years old. Right . So then we need to confront the question, okay, so why now, why in the last 10 years . Is it so heights . But it is a Pivotal Moment for a i right now within to say that, well, were certainly being told that and theres certainly a lot of resources, a lot of attention, a lot of investments that is riding on this being a Pivotal Moment. But again, what happened in 2012 right and 2012. It was shown by a number of researchers that techniques developed in the late 19 eighties could do new things when they were matched with huge amounts of data and huge amounts of computational power. And so why is that important . Well, huge amounts of data and huge server infrastructure is these massive computers which are new and more powerful chips are the resources that are concentrated in the hands of a handful of Large Companies based in the us and china, largely and that are as of the product of this surveillance advertising Business Model that was shaped in the ninetys that grew through the 2010s. And then, you know, in 2012, there was a recognition that we can do more than so advertisers access to demographic profiles for personalize advertising on your g mail on your facebook, wherever you encounter it. We can use this same resources to train i models to infiltrate new markets, to make claims about intelligence and computational sophistication that can give us more power over your lives and institutions. So i think we really need to have a Political Economic view to look at the Business Models who wins and who loses. And then look at whos telling us this is a personal moment. Right . So thats interesting because the way that our current moment is also being framed is around this rupture. And so sort of the moment that leads to the generation of technology, and they are that we called generating. They are, and i think that what youre saying is, its not particularly helpful to see this as a, as a rupture. And its helpful to see the continued reading a of the development of a i. Well, its helpful to investors who are recovering from losses on the matter of hers who are recovering from losses on web 3 to see this as a transformative moment, right . Theres are a lot riding on this, but youre not necessarily going right. You can make a lot of money by people believing its true long enough that you get an idea or an acquisition. It doesnt need to actually be true. But right, we didnt Start Talking about those in 2017 right. Like we hadnt been involved in, inundated with claims about, you know, a, i mimicking or surpassing humans in the same way we are now chat. Gpc wasnt everywhere. When did we Start Talking about this . I started playing has to be to you, i would say around 2018, but a but i have always been ahead of the curve. I. I, you know its, its, i think its just like playing with those models. I remember playing with gigi to and trying to think about what this means for our society if everybody starts having access to the means to create synthetic text. And at that time, there was not a lot of uses of these generated a models for text. And so, you know, i was paying this bill, this idea that when we had deep fakes, we werent going to have read fakes. But sort of now hold ocean, a synthetic text that was going to take over all of these online spaces. And i thought it would be fun to have to have the a i, right, and all fed around the consequences of read face, which i did it generated this matter for which i thought was really interesting of synthetic texts was going to be the grey girl of the or not, it was going to suddenly creeping everywhere as sort of a Science Fiction id or that it was going to really ruined a year. And as we knew it, and i think that was there was 2019. Yeah. Now were getting self awareness. Yeah. Something along these lines. Yeah. Of course you and some practitioners and people who are some deep in the field had been playing with these different techniques for awhile. But it wasnt until microsoft started spending millions of dollars a month to stand up and you know, and that took to create and deployed chat g p t that we started talking about it. And we need to recognize that a car, us hundreds of thousands of dollars a day to run this, but its actually extremely computationally expensive. And so that these are not, you know, these are not technologies of the many, right . We are the subjects of a i, we are not the users of a i in most cases. I think this is also why it has, for some of us felt like a Pivotal Moment because back when it was very much, you know, still Research Projects or conversations between practitioners. We kind of have the luxury to ask ourselves, well, what does it mean for instance, for this information that now everybody can produce synthetic text or what does it mean when we know that they are biases and stereotypes that are imbedded in this machine . How would we go about thinking the impact on society . And i think that changed a scale in terms of the urgency of those questions when suddenly everybody has access to these technologies and theyre suddenly being deployed really quickly in society. Theres been a community of scholars who have sort of preceded a lot of these, you know, advancements or microsoft deciding just to deploy a text generator with no connection to truce onto the public. You know, those decisions were not made because they were reviewing the scholarship, were reviewing your work. Camille were recognizing other social consequences. Those decisions were made because every quarter they need to report to their board positive predictions or results around profit and growth. And so we have these powerful technologies being ultimately controlled by a handful of companies that will always put those objective functions to use the Machine Learning term. First hardaway leverage power. Well, theres a classic answer to that, and that is, you know, workers banding together to use their collective leverage to push their employers. I love it, i find it very differential here. Yeah, well, i was almost not here because of the eurostar strikes. So i hope they, when im yes, lets talk a little bit of bouts. And those hardens that youre saying, a lot of people have been talking about has been documented. Thats explained for instance, what do we mean when we say theyre coded bias in these . All right, then these Machine Learning systems in this bigger is better paradigm which weve been in since 2012 relies on Data Collected and created about our present. And our past, this data in the context of text generation comes from things like for chance, read it youtube comments, wikipedia. And of course that data reflects, are scarred, past and present, which is discriminatory which has been and is racist. And massage in us, which sees different people as are deserving different treatments. So of course that data is going to be reflected in the outputs of the system. And now the danger here is that we by the height. And that we say this is the product of a sentence, an intelligent machine that isnt giving us objective truth. And that is just where that person fits and society. They have the genes of a janitor, right . And so i think theres, you know, when we see the rise and this eugenics thinking, when we see this blind face in machines, we really need to recognize what exactly that is naturalized and your right. Of course, to talk about how so many of these stereotypes are also here inherited from these Training Data that are taken from, you know, Online Sources that are, that are not diverse and well informed Online Sources. And its also been an issue with the Machine Learning for a long time. And before it was trained on those large Online Sources for face technologies, no matter if its text the matter if its patient analysis was already the case when we were also doing, you know, image recognition. I have have, have these issues is biases and coded biases. And spitting outs, 3rd types, of course when we talk about generated a i, one of the things that i find really important to highlight is so im an optimist. I know it. And i think that sometimes folks have had too much faith. And the idea that with bigger models and more data, we werent going to had in generally the right direction in slowly Getting Better at tackling. Theres vices, those discriminatory impact there was in, but its, there are times and we now recently have research that says this is actually not what happens a be the berryhead and 2 of her colleagues just published a wonderful paper that you actually, when you get to bigger models that are trained on more data, those racial bias is induced, there are types get worse, you get more of them. And here you can sort of see that were losing the race and under investing and how do we tackle and mitigate and understand those . Those are technical impacts of these technologies because were just scaling them too fast, right . Were not catching up with the problems that we know exist. More of the problem doesnt solve the problem. Thats right, right. So i think, you know, i never understood the basis for an assertion that like, well, you know, we have a little bit of trash and that makes the trash you model. But lets poor a bunch more trash on there. Oh, because thats going to scream it out. Right, theres a, you know, magical thinking, and i think a, a real, like almost emotional desire by a lot of the true believers to avoid the fact that maybe some of these problems are intractable. Maybe we cant sort of, you know, create a data set thats unbiased. Because of course, the data as always reflecting the perspective of its creators, and that is always biased, right . How do we change this paradigm . And how do we make sure that the people who also work on making technology safer, more fair, more responsible vera for its can also be accelerated in their voice. Can be centered in the way we talk about a, i mean, i dont think thats a technical problem. Right, that is a problem of the incentives that are driving the tech industry, which are not social benefit. Right. You know, you and i know we got no way of these people a lot. There was not always appreciate and i always loved your your willingness to ask those questions anyway. But i was pushed out of google for asking these questions right for loudly asking those questions for organizing around these questions. Right. So there is a point at which, you know, when youre talking billions of dollars versus a level will future, we have a system that is choosing billions of dollars repeatedly repeatedly, repeatedly and in the context of a system that is now giving this kind of authority, surveillance and social control capabilities to a handful of actors. I think that is a, thats an alarm worth raising pretty broadly can regulation and governments play a role in re establishing a little bit of balance in that system. Yeah, of course if theyre willing, right, labor organizing can help that social movements can help that regulation can help that. But yeah, regulation is an empty signifier till we fill it with specifics. All right, lets most here, lets do a little q, renee and then well talk about how to dismantle those Business Models and how do we build the future we want. Okay, my question is you very briefly touched upon that, but when the systems that we encounter and every day in this digitalization and the or were going, the very basis of it is, is basically based on private interest. How can we really, truly create meaningful change . I think fundamentally thats a question about capitalism, not a question about technology. And so how do we change that hamster wheel . That is sort of, you know, in order to avail ourselves of the resources needed to survive, we do waged work. Our waged work contributes to structures that we may not agree with. And so ive come up with, you know, their social movements, you know, i, i participated in labor organizing after being are kind of in house and public intellectual and the expert in thinking that that was a theory of change. Right. Im now with tech executive trying to do tech another way that is not profit driven, right . Thats another theory of change. And i think theres a International Workers of the world where a, a union in the us that had a phrase, a come slogan that stuck with me. And thats the day where you stand. So its like, what is your role . What is your, you know, who do you know . What can you do with the knowledge and context you have . And i think thats a question to ask yourself every day. But its not a question somebody else can answer. I agree with that, and i will say, i struggle with that question too because im in, in my fields of practice for security and trust and safety. The fires that are in the building are the things that you have to attend to immediately. You also want to think about the immediate future, but its also important sometimes to put out those fires because it can be that your election is at risk. It can be that kids are at risk online. It can be that, you know, terrible, suicidal impacts are and folding in front of your eyes. And, and, you know, one of the things that has been focused on recently is how can we make sure that across the industry, the folks were focused on putting out those fires are better equipped so that we dont have to reinvent the wheel every time. So making sure that we have the rigorous frameworks, the tools, and that all of this is easily accessible are open source and that people are properly equipped to do this work so that we can all serve. Invent alternative futures while we take care of the immediate harms us as something thats been very top of mind. Lets take another question. Do you think that the very public accelerated move towards in a i am facilitated workforce . Has the potential to hold a mirror of to some of the absurdities of the capitalist system and its current state. I want to point to sort of an example that will perhaps illustrate my views on this, which is the v. A strike in the us and the, the w j. The Writers Guild of america is a well established and fairly pop or for union that represents writers in hollywood. So the, the tv shows and that the film as you see, and they struck for a long time over the role of a, in the workplace. So they were saying, you know, you are not studio executive signing a contract with microsoft for g b t. Going to to introduce that to our labor process in a way that justifies your changing our title. Youre firing us as full time workers and hiring us back as precarious contractors. Youre reducing our wages and youre ultimately degrading the role of artwork. So, you know, i think, i think they won some pretty serious concessions in that strike. But what that episode showed is that, you know, were not actually talking about a replacing workers and a lot of cases. Theres a research recently published that estimates a 100000000. 00 people, a 100000000. 00 people are currently employed or have been employed recently in the task of cleaning and curating data required to train these systems, which is extraordinarily labor intensive. Then you have to do a very serious calibration process because these things are trained on for chan and read it and some of the most obscene and disturbing content on the internet. It is not often filtered out. And so schumann beings are the buffer for that. They see this content they have to say, no, thats not right. No, thats not right. And then you have to have, you know, something that camille is very familiar with. You kind of the clean up crew, the content, moderators the people who deal with the fact that these systems often say, you know, the wrong thing. So theres a huge amount of labor that actually power is what we call intelligence in the systems. So we should be, we should be aware of that dynamic, know that i cannot replace workers, its displacing work. And that it is a tool of, you know, employers, governments, you know, those in power are the ones with the resources to decide where to use. And it will very likely be used on us in ways that we need to push back against one more question. So i work and people on netflix at a big company. And this is something i struggle with on a daily basis. So if i get a i and just think more basic, ill go ahead and make hiring for example. So in the process like this, when you already have something very human being, is going to be subject to tremendous biases. Could something that can very well regulated technology or an algorithm actually help . Its a great question and i think we can start by saying its a fair question. Right . People have biases. Why does it matter if we also have machines with viruses and or the so many reasons for that in the 1st one is we dont know just at how those biases matter fast. We have a hard time measuring them. And so we can think about it in sort of like 2 questions. The 1st one is, what is the right set of areas in which we can deploy a i knowing that its in perfect. The 2nd set of question is in which ways are you able to detect and mitigate. So these potential biases, one thats popular, although that too is a mean is this idea of red teaming. So you would say this is the system that i want to use. This is the context in which i will use it. I am going to trigger all the bad scenarios that i will want to not happen so that i can understand if they are about to manifesto and i can understand if im able to mitigate them. So if youre able to answer all of these questions, then yes, by all means, go deploy technologies on areas that you think are not going to hurt people at scale with message that you have tested and can rely on to ensure that you have a awareness of these potential shortcomings and youre able to mitigate some but thats often not the case and its often not the case and in and people on olympics for sure. Yeah. I would agree with that. I would also say, you know, humans can be held accountable if they can justify the decisions and what can we help present . And so i think is a very good answer. But its also a counterfactual to the world. We live in, right . Yeah. Your or whoever is doing the contracting for the vendors is being sold a pitched by some company thats probably re scanning and amazon, or google a p i to make claims about sort of detecting in our competence that again, things like that. There are no, you have no scientific justification. And so i think you could build a system that help sort through resumes that that requires an, you know, ecosystem of good faith actors putting that use case above their self interest in many cases that we simply dont live in as i am encouraged by the fact that some cases has been and has been demonstrated with people being held into accounts. So ill take the example of proctoring software throughout the pen, demik. A lot of universities and education institutions. I turned to to a i in order to have students take exams at home while being surveilled or monitor. And they were very clear at places where this types of software had simply not been tested. And so what happened is students started sharing, organizing and saying, my face is not being recognized by the software. Im getting a bad grade because it thinks that ive cheated. I think that this is a violation of my right. So again, this is, this is a, you can still ask the right questions when you fail to have the right questions asked and put the right measures in place. We, we see those movements towards accountability, the military, what government, what Multi National doesnt want access to this hyper hyper hyper powerful, hey, i doesnt want to be the one whos sort of controlling it doesnt want to imagine themselves at the helm of desktop seals, technology companies, have a responsibility to tackle this information and then to think about how their technologies can be abused to manipulate elections, the the latest news as it break some of those who need a may be able to access it in parcels to them that are safe. And the way there is no fighting with detailed coverage. The conscripts the training here on an island that symbolizes swedens rustic change in military policy. From around the world, the Security Forces are not protecting the people, but rather the president and his decision to the lady elect, the ukraine, a father, front line, a mother and children. And this time of the seas future when the sky is full, the site is without an ukraine escaping to talk with our activities designed to help young mines come to terms with children with very different experiences, but also suffering. Some degree of trauma gung. Could children such as 12 year old carrillo struggled to talk about their experiences . We would take it into buses. He tells us to russia where we stayed for a bit over a month. Ukraine estimates about 20000 children have been full simply relocated to russia, despite repeated efforts and international mediation. Only about 400 children have been with us. For now, the focus is on reclaiming ukrainian children and helping them recover the hello until mccrae. This is l 0. Ally from dell, welcome to l. Special coverage of the United Nations highest quote, which is beginning a 2nd day of hearings into the legality of israel as 5060. 00 occupation of palestinian territory of palestine put forward its case on monday. Now 52 countries will present their arguments over the rest of the week. The hearings considering what

© 2024 Vimarsana

comparemela.com © 2020. All Rights Reserved.