She is the author of Unmasking AI: My Mission to Protect What Is Human in a World of Machines. Her MIT research on facial recognition technologies galvanized the field of AI auditing and revealed the largest racial and gender disparities in commercial products at the time. Her TEDx talk on algorithmic bias, with over 1.6 million views, served as an early warning of current AI harms. Her writing and work have been featured in publications including Time magazine, The New York Times, Harvard Business Review, and Rolling Stone, and she is on the inaugural Time 100 AI list. Dr. Joy Buolamwini is the protagonist of the Emmy-nominated documentary Coded Bias. She is a Rhodes Scholar, a World Economic Forum Young Global Leader, and a recipient of the Technological Innovation Award from the Martin Luther King Jr. Center. Fortune magazine named her "the conscience of the AI revolution." Dr. Buolamwini earned a Ph.D. and a master's degree from MIT, a Master of Science from Oxford University with distinction, and a bachelor's degree in computer science from the Georgia Institute of Technology.

Sinead Bovell is a futurist and the founder of WAYE, an organization that prepares youth for a future with advanced technologies, with a focus on nontraditional and minority markets. Sinead is a regular tech commentator on CNN, talk shows, and morning shows. She has been recognized as an educator by Vogue magazine, and to date she has educated over 200,000 young entrepreneurs on the future of technology. Sinead is an eight-time United Nations speaker. She has given formal addresses to presidents, royalty, and Fortune 500 leaders on topics ranging from cybersecurity to artificial intelligence, and she currently serves as a strategic advisor to the United Nations International Telecommunication Union on digital inclusion.

Thank you. Hello, everyone. Can everyone hear me okay? Well, we made it. Dr. Joy Buolamwini, Joy, my friend, my fellow sister, this is such an honor. To kick things off, there are two terms that I think need to be part of the everyday discourse, that we all need to understand, and that really stood out to me in your book. The first is the coded gaze and the second is the excoded. So what is the coded gaze, and who are the excoded?

Got it. Great way to kick off. Before I address that, I just want to thank all of you for coming out to the first stop of the Unmasking AI book tour. Ford was the first foundation to support the Algorithmic Justice League. They supported my art, and AJL actually has an exhibition piece here at the Ford Foundation Gallery, so please do check it out. And now to the coded gaze. All right, who's heard of the male gaze, the white gaze, the postcolonial gaze? Okay. The coded gaze extends that, and it's really a question of who has the power to shape the priorities of technology, but also whose prejudices get embedded in it. My experience of facing the coded gaze is what you see on the cover: it was Halloween, I had a white mask around, and I was working on an art project that used face tracking. It didn't detect my face that well until I put on the white mask, and I was like, dang, Fanon already said "Black Skin, White Masks." I just didn't think it would be so literal. And so that's what started the journey that became the Algorithmic Justice League. And really our focus brings us to the second term, the excoded: those who are condemned, convicted, or otherwise exploited or excluded by algorithmic systems.
And so the focus is, how do we liberate the excoded? How do we actually make sure that the benefits of artificial intelligence are for all of us, especially marginalized communities, and not just the privileged few?

And so what are some of the ways algorithmic bias and discrimination, being among the excoded, could be impacting all of our lives?

I mean, think of an -ism and it's there, right? So you can think of AI deciding who gets hired and who gets fired. Amazon had a hiring tool where, if you had a women's college listed, you got points deducted. There have been other hiring tools that have been evaluated where, if your name is Jared and you play lacrosse, you might get some extra points, right? So that's one kind of example. I also think about AI systems within medicine. You have these race-based clinical algorithms that aren't actually based on the science, and people get denied vital care. So that's another space in which it can creep up. Education as well: you might be flagged as having used a chatbot, and studies show you might be flagged not because you were cheating but because English could be your second language. So those are some of the everyday examples in which people get excoded. And then my work has focused a lot, as many of you know, on facial recognition technologies. So I think about people like Portia Woodruff, who was eight months pregnant when she was falsely arrested because of an AI-powered facial recognition misidentification. She sat in a holding cell having contractions, and when they finally let her out, she had to be rushed to the emergency room. Right? So that's the type of algorithmic discrimination that put two lives in danger. We could go on. It's a horror story. It's Halloween.

And there are some profound examples, more examples in the book, from a driverless vehicle maybe not seeing you; the list goes on and on, and my jaw just dropped at every one that I read. So in the book you talk about your viral TEDx talk, and if you haven't seen it, I highly recommend it, and you also discuss some of the comments that you received. One such comment was that algorithms are math and math isn't biased. So can artificial intelligence ever just be a neutral, objective tool?

That's a great question. And I've had so many AI trolls; even one of the book reviews was like, you're telling me CPUs and computers are racist? So how can this happen, right? In fact, I got into computer science because people are messy, and I was hoping I could be in the abstract world and not really have to think too much about bias. But when we look at artificial intelligence, and particularly the machine learning approaches that are powering many of the systems we're seeing today, the machines are learning from data, and the data reflects past decisions. Right? And we know, let's say, the gatekeepers for who gets hired might not have been so inclusive. And so that's where the bias starts to come in: you have systems that are looking for patterns, and the patterns reflect our society. So I'm not saying one plus one doesn't equal what you think it was going to equal, but I am saying that once we're applying these types of systems to human decision making, the bias creeps in.

Right. And I think that is something that we hear often, that technology is just a neutral tool and it's up to us how we use it. But you make a really important point in your book that there are decisions that get made prior to the technology even being deployed, and those decisions, by the very nature of doing things like classifying people, can't be neutral.
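To make that point about learned patterns concrete, here is a minimal sketch in Python. The records and the resume flag are entirely hypothetical, not any company's actual data; it only illustrates how a system that imitates historical hiring decisions inherits whatever disparity those decisions contain.

```python
# Minimal sketch with hypothetical data: a model that learns from past hiring
# decisions inherits whatever pattern those decisions contain.
# Each record is (resume_mentions_womens_college, was_hired_historically).
history = [
    (True, 0), (True, 0), (True, 1),
    (False, 1), (False, 1), (False, 0),
]

def historical_hire_rate(records, mentions_flag):
    """Hire rate among resumes that do / do not mention a women's college."""
    outcomes = [hired for flag, hired in records if flag == mentions_flag]
    return sum(outcomes) / len(outcomes)

print("hire rate, women's college mentioned:", historical_hire_rate(history, True))
print("hire rate, not mentioned:            ", historical_hire_rate(history, False))

# Any classifier trained to imitate these labels is rewarded for treating the
# "women's college" signal as a negative feature: the bias is not in the
# arithmetic, it is in the decisions the data records.
```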
And I think, yeah, that was a section that really stood out to me, and I want to read a quote from your book. This quote gave me chills, so I thought this would be the appropriate section to read out. "Seeing the faces of women I admired and respected next to labels containing wildly incorrect descriptions like 'clean-shaven adult man' was a different experience. I kept shaking my head as I read over the results, feeling embarrassed that my personal icons were being classified in this manner by AI. When I saw Serena Williams labeled male, I recalled the questions about my own gender when I was a child. When I saw an image of a school-age Michelle Obama labeled with the description 'toupee,' I thought about the harsh chemicals put on my head to straighten my kinky curls. And seeing the image of a young Oprah labeled with 'no face detected' took me back to my white mask experience." You went on to say, "I want people to see what it means when systems from tech giants box us into stereotypes we hoped to transcend." The way you called attention to these specific stereotypes was through a poem you wrote called "AI, Ain't I a Woman?" Can you tell us more about this poem and what it means to be a poet of code?

Oh wow, that gave me chills, yes, reliving it. Kids are mean out there. I'd always be asked, are you a boy or a girl, when I was growing up, so I think it's somehow ironic that this ends up being my research. So after I did the Gender Shades research at MIT, where I was doing my master's degree, and the results were published, the results showed performance for IBM, for Microsoft, and then later on for Amazon. These systems worked better on men's faces versus women's faces, and on lighter faces versus darker faces. And then we did an intersectional analysis, and we saw that they worked least well on the faces of dark-skinned women like me. And so when I observed that from the data, I wanted to move from performance metrics to performance arts, to actually humanize what it means to see those types of labels. And that's what led to "AI, Ain't I a Woman?" At first I thought it would be an explainer video, like I've done with other projects. And then I was talking to a friend, and they said, can you describe what it felt like? And as I started to describe it, they said, that sounds like a poem. So the next morning I woke up with these words in my head: "My heart smiles as I bask in their legacies, knowing their lives have altered many destinies. In her eyes, I see my mother's poise. In her face, I glimpse my auntie's grace." I was like, oh, something is happening, right? So I kept going: "Can machines ever see my queens as I view them? Can machines ever see our grandmothers as we knew them?" And the descriptions you just shared, right, so to see Sojourner Truth labeled clean-shaven adult male: those are the queens I was talking about. And that led to what my Ph.D. ended up focusing on, which was both algorithmic audits, like the Gender Shades paper, which showed performance metrics, and evocative audits, like "AI, Ain't I a Woman?," which humanize what AI harms look like and feel like.

I love that you used that word, to humanize this. So when you decided to pursue algorithmic bias as the focus of your research, this was 2016. It wasn't a topic many had heard of, and it certainly wasn't really discussed in public. And then your work took courageous early aim at big tech, at some of the tech giants, calling attention to some of the harms in their facial recognition systems.
Some of the companies lashed out at you, and some people did come to your defense, like Dr. Timnit Gebru, someone we also all adore and love. Shout out to Timnit. But others were fearful to come to your defense, as were some of the academic labs, because they feared it would impact their ability to get funding or to get a job. So as a student pioneering this research, how did you navigate that? And in your opinion, has the sentiment shifted, or do fears over career repercussions still hinder open discussions about AI ethics?

This is such a great question. I will say, now that I lead an organization, I have more sympathy for the administrative side, right, keeping things funded and all of that. At the time, as a grad student, I felt that Timnit Gebru, Deborah Raji, and I were really sticking our necks out, and I couldn't understand why more scholars weren't speaking up as much, until I started to follow the money trails. Many of these large tech companies fund many computer science degree programs, particularly Ph.D.s. I happened to be in a place where my advisor didn't have a Ph.D.; he was on a nontraditional path. I had aspirations of being a poet. So all of these things helped me not feel so much that if I poked the dragons, and they were fire-breathing dragons, I would be completely eviscerated. So I do think there is still a fear of speaking out. I do think the work of Gender Shades helped normalize this conversation so others could speak up. With Gender Shades, one of the things I did, which I was cautioned against, was actually naming the companies. Usually it's Company A, Company B, Company C, keeping my funding, right? So it was a risk to name them, but now this is common practice. And I also have to commend the senior academics who did come to our defense; later on I did hear there was a cost to doing that as well.

Yeah, and I think the research with Gender Shades gives us data to point to and the terminology that we all need when we want to advocate against some of these harms. So I have to ask: there are many voices in the world of AI who believe that superintelligence, and the potential for AI to cause humanity to go extinct, are the most important harms we should be paying attention to. So as someone who has dedicated their entire working life to combating AI harms, are these the real risks we should be tuning in to?

When I think of x-risk, I think of the excoded. So I think about the person who never gets the callback.

And can you explain what x-risk is, for people who may not know the term?

Oh, sure. You want me to talk about what the doomsayers say, the existential risk kind of thing? Sure. So you've seen Terminator, you've seen the headlines on the internet: the end of the world as we know it is here, we're all going to die. That's x-risk. The fear is that AI could become so intelligent that it takes over from the already powerful, and they become marginalized. This is my take on x-risk: they become marginalized, and wouldn't it be terrible to face oppression? Right. So this is x-risk as I see it. And what I notice, doing this work since 2016, is that sometimes there are intellectually interesting conversations that happen within theoretical spaces. Right, a what-if, and then we continue from that what-if. So we have that with what if AI systems become sentient, which they're not, or what would artificial general intelligence look like. And I think sometimes there can be a runaway narrative that is fictional, which doesn't reflect reality but gets a lot of attention.
And the problem with that getting so much attention is that it steers the agenda for where funding and resources are going to go. So instead of seeing what we can do to help Portia Woodruff, or Robert Williams, who was falsely arrested in front of his two young daughters, the money goes elsewhere. So that's the danger that I see. I think it's one thing to have an interesting intellectual conversation, but that's not necessarily what's going to help people in the here and now. In the book I label these: there are hypothetical risks, and then there are real risks that exist today. And one more thing I wanted to add. I've supported the Campaign to Stop Killer Robots, but AI systems can also kill us slowly. So think of structural violence. It's not the acute harm, the bomb dropping or the bullet being shot; it's when you don't have access to care, when you live in environments or housing conditions that worsen your life outcomes. Right? And so there we see AI being used for those types of critical decisions. That's a different kind of risk. Or, you mentioned self-driving cars earlier: there is a study that came out showing differences in detection accuracy, so kids and other shorter people were more at risk. So there are different kinds of harm. And it doesn't just have to be biased AI systems; accurate systems can be abused too. If we're again thinking of lethal autonomous weapons systems, you've got a drone, you've got a camera, you've got facial recognition. If it's accurate, it might get you; if it's not, it might still get someone else. Either way, it's still a problem.

And would you support banning any types of AI technologies?

AI-powered lethal autonomous weapons, and face surveillance, right. So it's not just facial recognition; it could be systems that are tracking your gender, your age, other characteristics. Sure.

So you've been in the documentary Coded Bias, and you were the face of the Decode the Bias ad campaign. From these experiences, what role do you see media having in the conversations that shape artificial intelligence, or shape how we think about artificial intelligence?

So I saw the power of media with "AI, Ain't I a Woman?" because it traveled, unsurprisingly, much further than my research papers. And I wanted to say, okay, how do we make these findings accessible but also bring more people into the conversation? I like to say, if you have a face, you have a place in the conversation about AI, because it impacts all of us. And so there was the opportunity to be part of the Coded Bias documentary. I was a bit hesitant, but then when I saw people reach out to the Algorithmic Justice League and say, oh, I'm studying computer science because of you, I was like, okay, I have to go do my homework. But, you know, I feel inspired by that kind of thing. Decode the Bias was interesting. I was partnering with Procter & Gamble's Olay, and they invited me to be part of an algorithmic audit. I said, are you sure? Because based on what I know, we'll probably find bias. They were like, that's okay. And I said, based on who I am, I'd like to make the results public and have final editorial decision. They said that's fine. I was only talking to the marketing teams; I don't know if the other teams would have been as quick to say yes. But long story short, we did that audit, we did find bias of different types, and Olay committed to the Consent to Data promise, which is the first of its type that I've seen from any company. So it shows that there are alternative ways of building consumer-facing AI products.
It was inspired by their skin promise, where, I think it was a year or two after I started modeling for them, they decided there's going to be no mor