
Hi, how is everyone feeling? Good? I'm very excited. My name is Sandra Khalil. I'm the head of partnerships at All Tech Is Human, and I have the honor of introducing our guests today.

Dr. Joy Buolamwini is the founder of the Algorithmic Justice League, an AI researcher, and an artist. She is the author of Unmasking AI: My Mission to Protect What Is Human in a World of Machines. Her MIT research on facial recognition technologies galvanized the field of AI auditing and revealed the largest racial and gender disparities in commercial products at the time. Her TEDx talk on algorithmic bias, with over 1.6 million views, served as an early warning of current AI harms. Her writing and work have been featured in publications including Time magazine (she is on the inaugural Time 100 AI list), the New York Times, Harvard Business Review, and Rolling Stone. Dr. Buolamwini is the protagonist of the Emmy-nominated documentary Coded Bias. She is a Rhodes Scholar, a World Economic Forum Young Global Leader, and a recipient of the Technological Innovation Award from the Martin Luther King Jr. Center. Fortune magazine named her the conscience of the AI revolution. Dr. Buolamwini earned a PhD and a master's degree from MIT, a master of science from Oxford University with distinction, and a bachelor's degree in computer science from the Georgia Institute of Technology.

Sinead Bovell is a futurist and the founder of WAYE, an organization that prepares youth for a future with advanced technologies, with a focus on non-traditional and minority markets. Sinead is a regular tech commentator on CNN talk shows and morning shows. She has been recognized as a tech educator by Vogue magazine, and to date she has educated over 200,000 young entrepreneurs on the future of technology. Sinead is an eight-time United Nations speaker. She has given formal addresses to presidents, royalty, and Fortune 500 leaders on topics ranging from cybersecurity to artificial intelligence, and she currently serves as a strategic advisor to the United Nations International Telecommunication Union on AI and digital inclusion. Thank you.

Hello, everyone. Can everyone hear me okay? Well, we made it. Dr. Buolamwini, Joy, my friend, my fellow sister, this is such an honor. To kick things off: two terms that really stood out to me in your book, terms I think need to be part of everyday discourse and that we all need to understand. The first is the coded gaze, and the second is the excoded. So what is the coded gaze, and who are the excoded?

Got it, great way to kick off. Before I address that, I just want to thank all of you for coming out to the first stop of the Unmasking AI book tour. Ford was the first foundation to support the Algorithmic Justice League. They supported my art; AJL actually has an exhibition piece here at the Ford Foundation gallery, so please do check it out. And now to the coded gaze. All right, who's heard of the male gaze, the white gaze, the postcolonial gaze? The coded gaze extends that, and it's really a question of who has the power to shape the priorities of technology, but also whose prejudices get embedded in it. My first encounter with the coded gaze is what you see on the cover. It was around Halloween, I had a white mask lying around, and I was working on an art project that used face tracking. It didn't detect my face that well until I put on the white mask, and I was like, dang, Fanon already said it: black skin, white masks. I just didn't think it would be so literal. And that's what started the journey that became the Algorithmic Justice League.
And really, that brings us to the second term, the excoded: those who are condemned, convicted, exploited, or otherwise excluded by algorithmic systems. So the focus is, how do we liberate the excoded? How do we actually make sure the benefits of artificial intelligence reach all of us, especially marginalized communities, and not just the privileged few?

And what are some of the ways that algorithmic bias and discrimination, being among the excoded, could be impacting all of our lives?

I mean, think of an -ism and it's there, right? You can think of AI deciding who gets hired and who gets fired. Amazon had a hiring tool that deducted points if you had a women's college listed on your resume. Other hiring tools that have been evaluated might give you points if your name is Jared and you play lacrosse. So that's one kind of example. I also think about AI systems within medicine: you have race-based clinical algorithms that aren't actually grounded in the science, and people get denied vital care. So that's another space where it can creep up. Education as well: you might be flagged as having used a chatbot, and studies show you might be flagged not because you were cheating but because, like me, English could be your second language. Those are some of the everyday ways people get excoded. And then my work has focused a lot, as many of you know, on facial recognition technologies. So I think about people like Porcha Woodruff, who was eight months pregnant when she was falsely arrested because of an AI-powered facial recognition misidentification. She sat in a holding cell having contractions, and when they finally let her out she had to be rushed to the emergency room. That's the type of algorithmic discrimination that put two lives in danger. We could go on; it's a horror story, and it is Halloween.

There are more examples in the book, profound ones, from a driverless vehicle that might not see you, and the list goes on. My jaw dropped at every one I read. So in the book you talk about your viral TEDx talk, and if you haven't seen it, I highly recommend it. You also discuss some of the comments you received. One such comment was: algorithms are math, and math isn't biased. So can artificial intelligence ever just be a neutral, objective tool?

That's a great question, and I've had so many AI trolls; even one of the book reviews was like, you're telling me CPUs and computers are racist? So how can this happen, right? In fact, I got into computer science because people are messy. I was hoping I could live in the abstract world and not really have to think too much about bias. But when we look at artificial intelligence, and particularly the machine learning approaches powering many of the systems we're seeing today, the machines are learning from data, and the data reflects past decisions. And we know the gatekeepers deciding who got hired might not have been so inclusive. That's where the bias starts to come in: these systems look for patterns, and the patterns reflect our society. So I'm not saying one plus one doesn't equal what you think it will. But once we apply these types of systems to human decision-making, the bias creeps in.
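To make that concrete, here is a minimal sketch, with an invented toy dataset rather than any real hiring system, of how a model that learns from past gatekeeping decisions reproduces the prejudice already present in its labels.

```python
# Minimal sketch: a classifier trained on biased historical hiring decisions.
# The data is invented; features are [years_experience, womens_college_flag].
from sklearn.linear_model import LogisticRegression

X = [[5, 0], [6, 0], [4, 0], [7, 0],
     [5, 1], [6, 1], [4, 1], [7, 1]]
y = [1, 1, 1, 1,   # past gatekeepers hired these candidates
     0, 0, 0, 0]   # and rejected these, penalizing the college flag

model = LogisticRegression().fit(X, y)

# Two equally experienced candidates; only the women's-college flag differs.
print(model.predict_proba([[6, 0]])[0][1])  # high predicted hire probability
print(model.predict_proba([[6, 1]])[0][1])  # low predicted hire probability
```

The arithmetic is exactly right the whole way through; the learned pattern simply encodes the bias that was already in the historical labels.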
Right. And I think that is something we hear often, that technology is just a neutral tool and it's up to us how we use it. But you make a really important point in your book that there are decisions that get made before the technology is even deployed, and those decisions, by the very nature of doing things like classifying people, can't be neutral. That was a section that really stood out to me, and I want to read a quote from your book. This quote gave me chills, so I thought this would be the appropriate section to read out.

"Seeing the faces of women I admired and respected next to labels containing wildly incorrect descriptions like clean-shaven adult man was a different experience. I kept shaking my head as I read over the results, feeling embarrassed that my personal icons were being classified in this manner by AI. When I saw Serena Williams labeled male, I recalled the questions about my own gender when I was a child. When I saw an image of a school-age Michelle Obama labeled with the description toupee, I thought about the harsh chemicals put on my head to straighten my kinky curls. And seeing the image of a young Oprah labeled with no face detected took me back to my white mask experience."

You went on to say: "I want people to see what it means when systems from tech giants box us into stereotypes we hoped to transcend with algorithms." One way you called attention to these specific stereotypes was through a poem you wrote called "AI, Ain't I a Woman?" Can you tell us more about this poem and what it means to be a poet of code?

Oh wow, that gave me chills, reliving it. Kids are mean out there; I'd always be asked, are you a boy or a girl, when I was growing up. So it's somehow ironic that this ended up being my research. After I did Gender Shades at MIT, where I was doing my master's degree, the results were published, and they showed performance gaps for IBM, for Microsoft, and then later on for Amazon: these systems worked better on men's faces than women's faces, and better on lighter faces than darker faces. Then we did an intersectional analysis, and we saw that they worked worst of all on the faces of dark-skinned women like me. When I observed that in the data, I wanted to move from performance metrics to performance arts, to actually humanize what it means to see those types of labels. And that's what led to "AI, Ain't I a Woman?" At first I thought it would be an explainer video, like I've done with other projects. Then I was talking to a friend, and he said, can you describe what it felt like? As I started to describe it, he said, that sounds like a poem. So the next morning I woke up with these words in my head: my heart smiles as I bask in their legacies, knowing their lives have altered many destinies; in her eyes, I see my mother's poise; in her face, I glimpse my auntie's grace. I was like, oh, it's happening, right? So I kept going: can machines ever see my queens as I view them? Can machines ever see our grandmothers as we knew them? And the descriptions you just shared, like Sojourner Truth labeled clean-shaven adult male: those are the queens I was talking about. That led to what my PhD ended up focusing on, which was both algorithmic audits, like the Gender Shades paper, which showed performance metrics, and evocative audits, like "AI, Ain't I a Woman?", which humanize what AI harms look like and feel like.
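For the quantitative half of that pairing, a disaggregated audit in the spirit of Gender Shades comes down to reporting a metric per intersectional subgroup instead of one overall number. A minimal sketch with invented records:

```python
# Minimal sketch of an intersectional accuracy audit; records are invented.
from collections import defaultdict

# Each record: (predicted_gender, true_gender, skin_type)
results = [
    ("male", "male", "lighter"), ("female", "female", "lighter"),
    ("male", "male", "darker"), ("male", "female", "darker"),
    ("female", "female", "lighter"), ("male", "female", "darker"),
]

totals = defaultdict(int)
correct = defaultdict(int)
for predicted, actual, skin in results:
    group = (skin, actual)  # intersectional subgroup, e.g. darker women
    totals[group] += 1
    correct[group] += int(predicted == actual)

for group in sorted(totals):
    print(group, f"accuracy = {correct[group] / totals[group]:.0%}")
```

A single aggregate number over these six faces, 67 percent, would hide that the darker-skinned women subgroup here scores zero.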
I love that you used that word, humanize. So, when you decided to pursue algorithmic bias as the focus of your research, this was 2016. It wasn't a topic many had heard of, and it certainly wasn't really discussed in public. And then your work courageously took on big tech, calling attention to some of the harms in the facial recognition systems of the tech giants. Some of the companies lashed out at you. Some people did come to your defense, like Dr. Timnit Gebru, someone we also all adore and love (shout out, Timnit), but others were fearful to come to your defense, as were some of the academic labs, because they feared it would impact their ability to get funding or to get a job. So, as a student pioneering this research, how did you navigate that? And in your opinion, has the sentiment shifted, or do fears over career repercussions still hinder open discussions about AI ethics?

This is such a great question. I will say, now that I lead an organization, I have more empathy for administrators, right, keeping things funded and all of that. At the time, as a grad student, I felt that Timnit Gebru, Deborah Raji, and I were really sticking our necks out, and I couldn't understand why more scholars weren't speaking up as much, until I started to follow the money trails. Many of these large tech companies fund many computer science degree programs, particularly PhDs. I happened to be in a place where my advisor didn't have a PhD; he was on a nontraditional path. I had aspirations of being a poet. All of these things helped me not feel so much that if I poked the dragons, and they were fire-breathing dragons, I would be completely eviscerated. So I do think there is still a fear of speaking out. I do think the work of Gender Shades helped normalize this conversation so others could speak. With Gender Shades, one of the things I did, which I was cautioned against, was actually naming the companies. Usually it's company A, company B, company C: keep my funding, right? So I named them, and now this is a common practice. And I also have to commend the senior academics who did come to our defense; later on I heard there was a cost to doing that as well.

Yeah. I think the research with Gender Shades gives us data to point to, and the terminology that we all need when we want to advocate against some of these harms. So I have to ask: there are many voices in the world of AI who believe that superintelligence, and the potential for AI to cause humanity to go extinct, are the most important harms we should be paying attention to. As someone who has dedicated their entire working life to combating AI harms, are these the real risks we should be tuning in to? X-risk?

When I think of x-risk, I think of the excoded. So I think about the person who never gets the callback.

And can you explain what x-risk is, for people who might not know?

Oh, sure. You want me to talk about what the doomsayers say, the existential risk kind of thing? Sure. So you've seen Terminator, yeah? You've seen the headlines: the end of the world as we know it is here, we're all going to die. That's x-risk: the AI becomes so intelligent that it takes over, and the already powerful become marginalized. That's my take on the x-risk conversation: they become marginalized, and being marginalized would be terrible, wouldn't it? That is the face of oppression. So that's x-risk as I see it. And what I've noticed doing this work since 2018 is that sometimes there are intellectually interesting conversations that happen within theoretical spaces: what if, and then we continue from that what if. So we have that with the question of AI systems becoming sentient, which they're not. Right.
What would artificial general intelligence look like? Sometimes there can be a runaway narrative, one that is fictional and doesn't reflect reality but gets a lot of attention. And the problem with it getting so much attention is that it actually shapes the agenda for where funding and resources are going to go. So instead of seeing what we can do to help Porcha Woodruff, or Robert Williams, falsely arrested in front of his two young daughters, the money goes elsewhere. That's the danger I see. I think it's one thing to have an interesting intellectual conversation, but that's not necessarily what's going to help people in the here and now.

I like how in the book you label it: there are hypothetical risks, and then there are real risks that exist today.

And one more thing I wanted to add, right: I've supported the Campaign to Stop Killer Robots, and AI systems can also kill us slowly. Think of structural violence. It's not the acute harm, the bomb dropped or the bullet shot; it's when you don't have access to care, when you live in environments or housing conditions that worsen your life outcomes. And we see AI being used for those types of critical decisions. That's a different kind of risk. Or you mentioned self-driving cars: there was a study that came out showing a difference in accuracy, so watch out for the kids and other short people, right? We're all at risk here. So there are different kinds of risk. And it doesn't just have to be biased AI systems; accurate systems can be abused too. Again, thinking of lethal autonomous weapons systems: you've got a drone, you've got a camera, you've got facial recognition. If it misidentifies you, that's a problem, and if it identifies you accurately, it can still come for you. Still a problem.

And would you support banning any types of AI technologies?

AI-powered lethal autonomous weapons, and face surveillance, right. And not just face recognition; it could also be systems that are tracking your gender, your age, other characteristics.

Sure. So, you've been in the documentary Coded Bias, and you were the face of the Decode the Bias ad campaign. From these experiences, what role do you see media having in the conversations that shape artificial intelligence, or shape how we think about artificial intelligence?

I saw the power of media with "AI, Ain't I a Woman?" because, unsurprisingly, it traveled much further than my research papers. And I wanted to say, okay, how do we make these findings accessible, but also bring more people into the conversation? I like to say, if you have a face, you have a place in the conversation about AI, because it impacts all of us. And so with the opportunity to be part of the Coded Bias documentary, I was a bit hesitant, but then when I saw people reach out to the Algorithmic Justice League and say, oh, I'm studying computer science because of you, I was like, okay, I've got to go do my homework. You know, I feel inspired by that kind of thing. Decode the Bias was interesting. I was partnering with Procter & Gamble's Olay, and they invited me to be part of an algorithmic audit. I said, are you sure? Because based on what I know, we'll probably find bias. They were like, that's okay. And based on who I am, I'd like to make the results public, final editorial decision. They said that's fine. I was only talking to the marketing teams; I don't know if the other teams would have been as quick to say yes. But long story short, we did that audit, and we did find bias of different types, and Olay committed to the Consent to Data promise, which is the first of its type that I've seen from any company.
And so it shows there are alternative ways of building consumer-facing AI products. It was inspired by their Skin Promise. I think a year or two after I started modeling for them, they decided there would be no more airbrushing or retouching, truth in advertising, which I support for body image. I think it's great, but I won't lie, I was like, whoa, okay, so nothing is going to save me. I was exercising and drinking water and sleeping, and of course doing my skincare regimen. But I thought it had lessons for the tech industry as well: when you know the standards are a little bit higher, you are forced to rise to the occasion. You can't improve what you aren't measuring.

So I think we're all starting to wake up to the reality that most of these AI systems, whether a facial recognition system, an image generator, or a chatbot, are powered by our digital labor, a.k.a. our data. What advice would you have for legislators on data privacy? And why might it not be enough if a company comes out and says, look, we're deleting your data, it's okay, it's all been deleted? Why might that not be enough?

So, I think of this notion of deep data deletion. When we're looking at machine learning, the type of AI approach that's powering so many of the headline AI systems you'll see, like chatbots, it's learning from a lot of data. So yes, the data is important, but the data is used to train a model, and then that model gets integrated into different kinds of products. Facebook deleted a billion faceprints, which they did after a 650 million dollar settlement, so there were some reasons; you don't just go delete those things for fun. And after they deleted the photos, which I commend, and which shows deletion is possible, it was important to note that they didn't delete the model. So you still have the model that was trained on ill-gotten data, and that's problematic. You can't just stop at the data. And even if you delete that particular model, if you've now open-sourced it and it's integrated into other places, it continues to travel. The ghost of the data lives on in the models and the product integrations. So when I think of deep data deletion, it's really tracing where the system goes and understanding that the data is a seed.

Deep data deletion. Everybody remember that; we're starting the hashtag tomorrow at 9 a.m., okay?
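Here is a minimal sketch of why stopping at data deletion falls short. The numbers are toy stand-ins, not any real system: the point is that the fitted model keeps what it learned after the training set is destroyed.

```python
# Minimal sketch: deleting training data does not delete what a model learned.
from sklearn.linear_model import LogisticRegression

# Toy stand-ins for biometric records gathered without meaningful consent.
faceprints = [[0.2, 0.1], [0.1, 0.3], [0.9, 0.8], [0.8, 0.9]]
identities = [0, 0, 1, 1]

model = LogisticRegression().fit(faceprints, identities)

faceprints.clear()  # "we deleted your data"
identities.clear()

# The fitted parameters, the ghost of the data, still drive predictions,
# and every exported or open-sourced copy of the model travels on.
print(model.predict([[0.85, 0.9]]))  # -> [1]
```

Deep deletion in this picture means tracing the seed: the dataset, the models trained on it, and every product those models were integrated into.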
So, in your opinion, what can be done to prevent algorithmic harm? What should we do? Where should we go from here?

What I've learned most on my journey is that storytelling matters. Our stories matter, and this all started with me taking the step of sharing my experience of coding in a white mask. That led to the research, which led to the evocative audits, and here we are. Well, you know, that escalated quickly. It didn't escalate that quickly. But stories do matter, because you have to be able to name what's happening, and putting out terminology like the coded gaze and the excoded and so forth is part of it. So I think that's something we can all do: share our experiences with different types of AI systems. Another piece to keep in mind is the right of refusal. I'm in airports all the time, and I see face scanning happening, and oftentimes people don't know that as a US citizen you have the right to opt out. If you go to the TSA website, they'll tell you: our TSA agents are trained to treat you with dignity and respect, and there will be clear signage. So I'm like, okay, let's test it out. I research this, so I'm looking for the signs. I find one in Spanish, laid flat, not upright, on the far end of the counter. I can barely see the opt-out notice, and other people are not even looking for that sign. In fact, at the Algorithmic Justice League we launched a campaign, fly.ajl.org, not just so we could have the cool subdomain, though that was fun, but so people could actually report their experiences, and over 80 percent of the people who responded hadn't even seen those types of signs. But you can say no, and pushing back, exercising the right of refusal, is really important.

The other thing is that the coded gaze can often be hard to see. Facial recognition and facial analysis are part of the reason I use that example: it's so visceral, right? I don't have to write a whole research paper. You can see my friend's face detected and my face not detected, and ask, what happened? It starts the conversation. But there are so many other areas in which AI is being used that you may never know about. You don't get the mortgage; you don't get the loan. So I do think due process is important, where we have a sense of what systems are being used. And until that's mandated, you have to ask. If your kid is flagged for some disciplinary situation and it turns out there was an algorithm involved, you should know. So in each domain you find yourself in, and you might be in a medical facility, et cetera, ask if there are any AI systems or algorithms in use. They may not know the answer, right? But it starts that exploration, and it also starts a potential story for you to share, to join the movement.

So, speaking of due process and potential pathways for litigation: the White House just announced their executive order on artificial intelligence yesterday. It's supposed to be one of the most comprehensive in the world. I think we need your take on it. Are we okay? Are we moving in the right direction?

Yeah, I was supposed to be at the White House; I came out for you guys instead, is all I will say. It is definitely a step in the right direction, because it lays out what needs to be done. Of course, the devil is going to be in the details when it comes to execution. I will say that it builds on the AI Bill of Rights, which was released as a blueprint last year, and that is principles-based, right? We want to make sure that people have protections from algorithmic discrimination, that we have privacy and consent, that systems are actually safe and effective, and, importantly, that there are alternatives, so you don't, for example, have to scan your face to access the IRS. Where I think it falls short, and where I also see many congressional actions around AI fairness, AI safety, and accountability falling short, is this notion of redress. I'd love to say we'll figure it out, right? We're working on AI harm prevention. But what happens when we get it wrong? And what about the people who've already been harmed? So I think redress needs to be a bigger part of the conversation, and you can start the redress conversation by tracking the harms. That's why we're building an AI harms reporting platform, so we have the evidentiary record.

And do you think it does enough to prevent harm in the first place, or is it leaning on managing it?

Tell me. Actually, let me ask: what did you think?

I think on managing risk it did a decent job. On tackling and preventing harm in the first place, in design and in how we gather data, I think it was kind of lacking. So we're hitting it from the second end of the value chain, or the supply chain, rather than from how we start. Let's design things with safety in mind.
Let's design and gather data to avoid algorithmic harm, and not wait and try to manage it after the fact.

Got it. And I wanted to know your take. Now, my final final question before we move into poetry: what applications of artificial intelligence are you excited about, that could help humanity?

Excited. Interesting. It's kind of ironic to me, because all of this started when I was using AI for something fun. I wanted to create an Aspire Mirror, so when I looked in the mirror in the morning it could say, hello, beautiful, or I could say, make me look like Serena Williams, or now Coco Gauff, you know, the athlete. And it went wrong. So, yeah, that's why we're here. Some of the areas of AI that excite me, though I'm cautiously optimistic, are its applications for health care. I don't think it's a small achievement that AlphaFold predicted 200 million protein structures. When I was a little girl, and I talk about this in the book, I used to go to my dad's office and feed cancer cells. He's a professor of medicinal chemistry and computer-aided drug development. So I grew up with my dad's posters of protein-folding structures all over the walls and all over our house. And yeah, he wanted me to go into chemistry, but the silicon computers themselves just looked so cool, so I ended up going in a different direction. So that part excites me. But then I also think about so many of the disparities we have in health. There's a femtech company I invested in that focuses on women's health: one in three women die of cardiovascular disease, but less than a quarter of research participants are women. And we know about biased data sets, right, and what can go wrong. So I do think we have to be really vigilant in order to realize the potential of AI in health. But it still excites me, even though I did not continue the family project into the next generation; I took it in a different direction.

I'd say you believed in the true ending of the protein fold, which was an algorithm. So maybe you knew all along; your intuition was like, there's going to be a computer for this.

Yeah, I'll have to bring you to Thanksgiving to defend my honor.

Well, that concludes my questions. Thank you for all of your rock-star answers, and I think we're really just getting started. I think now we get to hear some poetry.

Absolutely. So I will go over here really quickly. Mic check, one, two. Y'all can hear me okay? All right. So there are a lot of poems in the book, and I am going to read a poem that is in part four. Anyone know what page part four is? Part four is, let's say, "Poet versus Goliath." That's the fun chapter, for sure. Let's see, it's a long book: pages 202 to 209. Oh, wow. All right. So this poem is called "To the Brooklyn Tenants," and the reason I chose it is that we're here at the Ford Foundation, and the Ford Foundation has been supporting people on the front lines of justice for some time, and the Brooklyn tenants follow within that tradition. I was feeling at a low point in my research, not sure if being an academic was having that much impact, and having the opportunity to share my research with them, and to see them use it for their own resistance campaigns, was very inspiring, and it led to this poem.

To the Brooklyn tenants, resisting and revealing the lie that we must accept the surrender of our faces, the harvesting of our data, the plunder of our traces.
We celebrate your courage. No silence, no consent. You show the path to algorithmic justice requires a league, a sisterhood, a neighborhood: book talks, hallway gatherings, Sharpies and posters, coalitions, petitions, testimonies, letters, research and potlucks, dancing and music, everyone playing a role to orchestrate change. To the Brooklyn tenants and freedom fighters around the world, persisting and prevailing against algorithms of oppression automating inequality through weapons of math destruction, we stand with you in gratitude. You demonstrate the people have a voice and a choice. When defiant melodies harmonize to elevate life, dignity, and liberty, the victory is ours.

Thank you. And I think we have a Q&A, so come on up; I'll be here. All right, feel free, Doctor. Okay, give it up for Dr. Joy Buolamwini. The time is 2:50, and we're going to do ten minutes of Q&A. Please feel free to flag the roving mics with Trevor and Sonya. And yeah, let's take it away.

Hello. First of all, congratulations on the book, and thank you for being here in New York. (Do you mind standing up? You're okay with that? Yeah, okay.) Again, thank you for being here. Since we're in New York, I wanted to ask if you had any thoughts on the AI action plan the city just put out; the city has asked for participation from stakeholders to understand how the city should be regulating and using AI. So given the backdrop of that, which was maybe two or three weeks ago, and then the executive order yesterday, I'd love to hear what you think city-level government should be doing in terms of using AI responsibly, or whether we should just be advocating that government not use it at all.

Oh, that's a great question, and something I've thought about quite a bit within the space of facial recognition technologies. We've seen ordinances in different places; for example, it's probably no surprise that in Cambridge, in Boston, and in Brookline, Massachusetts, the police can't use facial recognition technologies. So I certainly think what happens at the city level, the municipal level, matters. My concern is that you shouldn't have to live in the right city to have protections. That's where you sometimes see a patchwork of frameworks, but we really do need federal legislation that gives at least a floor of protection for everybody. So those are my initial thoughts.

So many hands; decisions, decisions.

Hello, and thank you. My name is Andrew. I'm here with the Institute for Advertising Ethics and PMG, so thank you for what you've done. Here's my question for you: since so much of the funding for what is AI, or purports to be AI, is advertising money, what do you think advertisers can do with their financial willpower to push AI in the right direction?

Oh, thank you. Yeah, wow, a great question. I will say, what I think all companies should be doing, including those who have the advertising dollars, is to put the money toward AI systems that have been vetted. Too often, what we'll see is that you hear the promises of AI, and we buy into the belief, or at least the hope, of it. And I think just a first step is seeing if a system is fit for purpose. So that could be one approach.

I want to see some ladies. Hi. I'm very curious, especially right now, because it's such a vital time for legislation and everybody's got it on their plate. I'm really curious about how many social scientists are involved in these conversations.
You know, the reality is moving from human-centered, because human-centered can mean anything, to humanity-centered, right? So I'm curious, in your experience, how many social scientists who really understand psychosocial wellbeing are involved in these conversations.

Yeah, I will say it continues to be limited. But I was so encouraged that one of the architects of the AI Bill of Rights, who is also now on the UN AI Advisory Council, is Dr. Alondra Nelson. And I definitely think social science sensibility is not just nice to have; it's essential. But it continues to be sorely lacking.

Hello, and thank you for all of the work you've done. I feel like I know you. So my question is really for some of the students in the room, and for the students who couldn't be here, given the moment we're living in right now: mass layoffs, while the same companies doing those layoffs are doubling down on AI systems, and with what is being unearthed about the biases and who's being impacted. So what are some words of hope, especially from someone who is in this space and didn't take the traditional mainstream mindset of, hey, as an academic these are the hoops I'm supposed to jump through, but instead built resistance? What are some words of hope for the students you're working with? And whose work is inspiring you in the midst of some of the bleakness?

Got it. Well, part of why I wrote Unmasking AI is that it starts with my student journey, when I didn't even know if I wanted to say anything, because I could get in trouble, and I might want a job one day, you know? So the struggle is real, and I acknowledge it, and I also acknowledge that there can be a cost to speaking up. But in terms of where there's hope: I met with President Biden in June of this year, and I had a photo of Robert Williams and his two young daughters, and Robert was holding the first Gender Shades Justice Award. And President Biden was asking about those high error rates and the threat they pose. There is hope once we start having the president ask some of these questions; that's a long way from coding in a white mask. And I would say, in terms of words of encouragement: tech needs us; it's not the other way around. The perspectives you bring and what you care about are important. I had to find that for myself. When I first wanted to do this research, people were like, how good are your math skills? Are you sure you really want to do that? I had a bit of discouragement from very well-meaning people; they just had to look out for me, to make sure I didn't get hurt, and all of that, right? So what helped me was having people like Dr. Timnit Gebru; she was finishing up her PhD while I was finishing my master's. And then, you're asking who I look to, who is inspiring: Deb Raji reached out on Facebook, right, saying, hey, I saw this, can I do an internship with you? She didn't know we'd be going toe to toe with Amazon; she did ask about that later, though. But I think finding support and helping each other matters, you know, and we were there for each other when it got intense; you might have seen some of the headlines and so forth. So I think that's really important. And even just being proactive: Sinead reached out in my DMs and said, I see you're doing a book tour, I'm in New York, can I be part of it?
And now she's here at the first stop, you know? So that's where we are. I take a lot of inspiration from people being proactive in that way. And, I mean, my name is Joy, so I'm probably going to be generally optimistic, you know. But I wouldn't feel so discouraged, because pendulums swing back and forth.

And I'll just add, because it's very present in your book and in your work, that we have solutions for all of the problems you discuss; you provide solutions in the book and in your work more broadly. None of the biggest problems we're facing is a matter of physics that we can't solve; they actually have answers, and it's just a matter of executing on them. You make that very clear in your book, and I found it very inspiring.

Thank you.

Oh wow, so many questions. Maybe that's perfect, because I have a question around solutions that I've been dying to ask, so we're going to fall into one of those "yes we can" answers. Awesome. My name is Alisha Stewart. I'm the founder of an API-enabled product that helps journalists find a more representative sample of the globe. And I'm really curious, Doctor. First of all, I have to say I'm so inspired by your joyful-warrior mentality and the battles you've been through. I'm really curious to hear: if you were going to create a large language model today that is accurate, right, and is representative of the global population, and will correctly identify Serena Williams, what do you think it would take, and who would you call?

I actually think the answer is smaller models: smaller, more focused models. One of the major issues when you're dealing with large language models with billions or trillions of tokens is that you don't have the documentation or the data provenance to have an understanding of it. It's a bit of a mystery-meat situation, like a magic eight ball: shake it up, see what comes out. Toxic? All right, let's shake it again. With more focus, I would actually think of smaller bespoke models based on the context you're looking at, so you have a better handle on what the potential risks are, as well as tailoring it to a specific need or a specific community. So I think it's tempting to think scale, scale, bigger, bigger, bigger, like a lot of men in this field, you know. But I do think thinking through other ways is helpful here.

Yeah, next question. Can you hear me okay? Hello. Thank you so much for speaking; I'm very inspired by all the discussions we've had so far. I worked at MIT as an engineer for a few years, and now I'm doing research in public interest tech, focused specifically on social media platforms, at the Berkman Klein Center at Harvard. So I think a lot of these tech companies are still in the process of developing these AI systems, and are implementing them and using them in products as they stand now. And I really appreciate that you mentioned storytelling and really contextualizing the harms these systems cause. Within the past few weeks there was actually a very severe bug that Meta had reported, a translation issue, where they used AI to translate Arabic text that mentioned "Palestinian" into "Palestinian terrorist." This caused a lot of real-world harm and fed into a lot of misinformation and false narratives about people who are being harmed right now in the world. And it was kind of just brushed off: you know, the AI systems used to do these translations have an open problem with hallucinations, and they're still iterating on and focused on fixing these systems.
But I think these tech companies still need to be held accountable. So what are your thoughts on making sure, while the technology is still developing, that there are guardrails in place so that it doesn't actually get released and have these downstream effects, while at the same time understanding that there's still a lot of development going on and maybe there will be mistakes, but they should be caught as soon as possible?

That's a great question, by the way. I think about the entire AI development lifecycle. So you have design, development, deployment, and oversight, and the part I would always include: redress. Ideally, you've tested as much as possible before you get to the deployment phase. But if there were consequences and penalties for making that type of egregious mistake, right, I do think companies would be incentivized to be a bit more careful before putting out the mystery meat. And this goes a bit to your point about large models and other large-scale approaches to AI: you don't have the nuance, you don't have the context, and instead the system reflects which terms are more consistently used with particular groups. I think we saw this with a tool that was used for some sort of sentiment rating: if it saw the words "black women," the phrase would get a negative sentiment. Not because black women standing alone are negative, you know; I don't see it, don't see it. But because there are so many negative associations in the text, that was the pattern being picked up. So I do think that if there were redress, and there were consequences for making those sorts of mistakes, then the more costly the mistakes were, the more cautious companies would be.
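A minimal sketch of that association effect, with an invented corpus and scores: per-word statistics learned from skewed text, not the phrase itself, end up driving the sentiment.

```python
# Minimal sketch: sentiment "learned" from co-occurrence in a skewed corpus.
from collections import defaultdict

corpus = [
    ("black women face discrimination", -1),  # negative news-style contexts
    ("black women report harassment", -1),
    ("black women celebrate victory", +1),
    ("sunny morning brings joy", +1),
]

score, count = defaultdict(float), defaultdict(int)
for sentence, label in corpus:
    for word in sentence.split():
        score[word] += label
        count[word] += 1

def phrase_sentiment(phrase):
    # Average the per-word scores learned from the corpus.
    words = phrase.split()
    return sum(score[w] / count[w] for w in words) / len(words)

print(phrase_sentiment("black women"))  # negative: inherited associations
```

If redress and real penalties were on the table, catching exactly this kind of inherited association before deployment would become the cheaper option.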
All right, let's have a round of applause for Dr. Joy Buolamwini and Sinead Bovell. Yeah. Thank you for skipping the White House to celebrate Unmasking AI. I know many of you have a copy, a signed copy, but if you don't, go get Unmasking AI, okay? Tell your friends, your families, everyone, to check it out. This is the celebration; this is the book launch, today on Halloween. Speaking of masking: if you're going to be talking to Dr. Joy, we ask that you do mask up; we have some extra masks to the right of me. I'm David Polgar with All Tech Is Human, one of the partners for today's event, alongside the Algorithmic Justice League, who you know, and the Ford Foundation, our beautiful host venue today, also the Institute for Global Politics and Random House. So thank you to everyone for carving out the time today to celebrate Unmasking AI. One last time for Dr. Joy.