It's 5:00 on the dot, and I want to give our last speakers every minute possible. Here with a keynote on artificial intelligence and the New Jim Code: professors Ruha Benjamin and Meredith Broussard. And with that, I'm going to turn the floor over to both of you.

Thank you. Thank you, Charlton, and thank you everybody for coming today. This has been a really stimulating, intellectually stimulating day. It's my great pleasure to be here today with Dr. Ruha Benjamin, an associate professor of African American studies. She is the author of People's Science: Bodies and Rights on the Stem Cell Frontier, and of a new book called Race After Technology: Abolitionist Tools for the New Jim Code, which is available for preorder now and coming out in early June. She is also the editor of a new book called Captivating Technology: Race, Carceral Technoscience, and Liberatory Imagination in Everyday Life, which comes out this week. It's my great pleasure to be on stage with Ruha.

And I get to introduce Professor Broussard, but I have to say as I introduce her that unfortunately her book is already dated. I would love to do a plug, but she is now an associate professor of journalism at NYU. We may need a Naomi Wolf moment and recall the books. Professor Broussard is a data journalist at the Arthur L. Carter Journalism Institute and the author of Artificial Unintelligence: How Computers Misunderstand the World. Her research focuses on AI and investigative reporting, with a particular interest in using data analysis for social good. A former features editor at the Philadelphia Inquirer, she also worked as a software developer at AT&T Bell Labs and the MIT Media Lab.

With that, let me get started with the first question. One of the things that is really striking about Artificial Unintelligence is the balance between tech enthusiasm and tech skepticism. There are many lines I love in the book that capture this balance. You say: "As a car, the Tesla is amazing. As an autonomous vehicle, I am skeptical." So tell us a bit about how you hold these together, as someone who writes code but is also critically engaged with the entire tech industry. How does that work in your analysis?

Well, I actually open the book with a story related to this, about a time when I was little and I tried to build a robot. I had this robot-building kit, and it had a little motor, and I plugged it in, and it didn't work. So this was my introduction to the fact that we can build things with technology, and what we imagine the technology is going to do is not necessarily the same as what the technology actually does. There is a really big gap between our imagination and reality. I took this knowledge into my work as a developer, because often I would write code and imagine the code would do something, but once I implemented it, it couldn't actually live up to my expectations. This is something we really need to confront as people who make technology: we need to not get out over our own skis. In the case of autonomous vehicles, which I write a lot about in the book because I think they are a really important technological issue as well as a social justice issue, the makers are really invested in the fantasy of autonomous cars, and they're not giving enough attention to the often disappointing realities. And so it is absolutely possible to keep building technology in the face of constant disappointment.
I mean, this is actually how you build code. You have to be okay with constant failure, basically, because nothing ever works the first time you build it. You have to bang your head against the wall, and it feels amazing once you stop. But this gap between imagination and reality is something that we really need to be more honest with ourselves about when we're thinking about technology and society.

So, Ruha, I want to ask you about your work. Your sociological work focuses on science, medicine, and technology. How did you come to this interest, and how do these fields intersect?

The short answer, the kind of personal answer, is that when someone tells me not to do something, it makes me just want to do it. When it comes to all the things that sociologists study, I found as an undergrad that there was a kind of black box, an exceptionalism, around science and technology that didn't necessarily pertain to other fields. If someone is studying economics or politics, people don't stop them and say, well, were you a politician before this? Were you an economist? The assumption is that you can have some kind of access to that arena without being trained in it, which we don't grant to science and technology. So I was interested in that exceptionalism and in breaking through it, piercing that bubble. And then, as I moved further into the social studies of science and technology, I found there were lots of people doing it, but oftentimes that research arena was framed as looking at the social impacts of science and technology. The science and technology was a given, and then we wanted to study how it was impacting different communities. I became really interested in the social inputs of science and technology, that is, the way that our social order, our assumptions, norms, and biases are embedded in the things we take for granted. Widening the narrative: not starting with the science and technology as a given, as inevitable, and then studying how it impacts society, but looking upstream at how society actually shapes the technology that we're taught is inevitable.

This idea of inevitability, I think, is really interesting. And one of the really exciting things about reading your new book was realizing we were talking about so many of the same things. I wish we had been in conversation, like, three years ago.

I know. But I will say something about the books: I do see both of our books as provocations, as conversation starters, not trying to close things up or tie a nice bow around them. In some ways I'm glad it's sparking a conversation after the fact.

Very much so. And this feeling of inevitability, I think, is really interesting, because my sense is that people feel disempowered in the face of technology. There's been this narrative that tech is something we have to sit back and let happen to us, that we don't have agency in deciding what technologies get developed, or what the interventions, or the invasions, in our lives will be. And I think there are two sides to the techno-determinism. One is a kind of fatalism: technology is going to destroy and devour us. But the other side is also techno-deterministic, which is that it's going to save us. Whether we think of it as saving or slaying, both are deterministic ways of thinking that rob us of our agency.
Thinking about the crafting of your own book, one of the things I love is the way you bring together critical analysis with storytelling. There was a panel a couple of sessions ago where a lot of the conversation was about the role of storytelling both in reinforcing certain kinds of hierarchies and inevitabilities and as a tool for subverting them. I wanted you to take us behind the scenes in terms of how you think about storytelling in the crafting of this analysis.

I get to talk about literary craft. This is very exciting. I come from a literary school called immersion journalism, which derives from the New Journalism of the 1960s and is heavily influenced by participant observation. As an immersion journalist, you immerse yourself in a world in order to show your readers what that world is like. So I do a kind of immersion journalism for technology: as a data and computational journalist, I write code in order to commit acts of investigative journalism, and I often build code in order to demonstrate something about how technology works, or how particular technological interventions work. In a couple of episodes in the book, I build campaign finance software to try to fix the campaign finance system, which, P.S., is really broken, and it didn't work. I also built artificial intelligence software to try to fix a particular problem in Philadelphia's public schools. Public schools in Philadelphia did not have enough books for the students to learn the material that was on the state-mandated standardized tests. And nobody had ever asked the question before: do the schools have enough books? So I tried to answer that question, but I couldn't, because the software to answer that question didn't exist. And you can't actually do the calculation in your head, because it's just too hideously complicated. The process of building that technology was really illuminating in understanding the role of data in that system, and also in understanding why it's so hard for kids to succeed inside resource-starved school districts. It's not really a data problem. It's a people problem. So building technology, and talking about how I built the technology, was a way of accessing larger social issues around what we are doing to kids in public schools.
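To give a flavor of why a question like "do the schools have enough books?" is trivial for software but impossible to do in your head at district scale, here is a minimal sketch in Python. The schools, titles, and counts are invented for illustration; this is not the real Philadelphia data or the system Broussard actually built.

```python
# Purely illustrative: invented schools, titles, and counts.
# Assume each school needs one copy of each required title per student.

schools = [
    {"name": "School A", "students": 620,
     "inventory": {"Algebra 1": 540, "US History": 300}},
    {"name": "School B", "students": 410,
     "inventory": {"Algebra 1": 410, "US History": 455}},
]
required_titles = ["Algebra 1", "US History"]  # tied to state-mandated tests

for school in schools:
    for title in required_titles:
        have = school["inventory"].get(title, 0)
        shortfall = school["students"] - have
        if shortfall > 0:
            print(f'{school["name"]} is short {shortfall} copies of "{title}"')
```

Multiply this loop across hundreds of schools, dozens of required titles, and inconsistent inventory records, and it becomes clear why purpose-built software, not mental arithmetic, is needed to answer the question.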
One of the stories you tell, which you hinted at a few minutes ago, took place in 2007: this kind of harrowing story of you riding in a driverless car. One of the stories we tell about tech is, just give it time, right? We'll fix the bugs. It's been twelve-ish years. Where are we with that? Would you say we're at a place where it's reliable and safe enough for wide use?

Not even vaguely. In 2007 I first rode in a driverless car, and it almost killed me. And that was in an empty parking lot with no traffic. The technology has come a long way since then, but it has not come as far as the marketers would like you to believe. Next time you hear somebody say that driverless cars are five years away, think about how many times you have heard that, and how long people have been saying they're five years away, because they've been saying it since at least the early '90s. They are not actually five years away. They do not work as well as you imagine. And, in fact, one of the things that I've been increasingly worried about with driverless cars comes from something you allude to in your book, which is the inability of systems to see darker skin. Image recognition systems, facial recognition systems, object detection systems: first of all, they don't see the way that a human being does. The other thing is that they don't detect people with darker skin as well as they detect people with lighter skin. So the systems that are embedded in these two-ton killing machines are not detecting people with darker skin. And I would pose the question: who exactly do we think is going to be killed by self-driving cars on the street?

And you go into this a little bit when you talk about the racist soap dispenser. Are you all familiar with the viral online video of the racist soap dispenser? There is a man with light skin, and then a man with darker skin puts his hand under the soap dispenser and it doesn't work. Can you tell us how this artifact functions in your work?

Sure. One of the things I try to do in that section is take up something that seems kind of trivial. In fact, the people displaying this on YouTube are giggling through the clip, if you recall. It seems funny, entertaining, not that consequential when you just read it as a glitch of one particular technology. But we also have to think about how we got to that point: what are the larger structural aspects of the research and development process that would make it such that that would happen? And those same dynamics are also operating behind much more consequential kinds of technology that are making life-and-death decisions about people. Whether it's the risk scores we heard about earlier, in terms of whether to parole someone or not; whether it's decisions in health care or education; whether someone should get a loan that they really need. The same types of questions, about how algorithms, or in this case automated systems, are developed, and who they are seeing or not, can be applied beyond any particular type of technology.

At the same time, I do think making distinctions is important, so I'm juggling the attempt to make important distinctions. Rather than just talk about racist robots, which is a great headline, and which is how I became interested in this, the headlines about, oh, the automated systems are racist, we should think about how they are racist. What do we mean by racism? We have a spectrum of technologies explicitly attempting to reinforce hierarchies. They are not hiding it. So we have to make a distinction between that end of the spectrum, technologies that are designed to and do reinforce longstanding racial caste, class, and gender hierarchies, and, and this is what interests me most, technologies that are being developed to bypass bias, sold to us as actual fixes for human biases. Think of those as a kind of technological benevolence. Tech fixes that say: you know, we humans are messed up, the judges, the prosecutors, the police, the teachers are racist; here is an app for that, an algorithm for that. We need more attention around the things that are purportedly here to do good, to help us address inequity through a particular technology. We have to keep an eye on the obviously racist robots, right? But I think we need more attention around the promised do-gooding of so much technology, and the way that we turn to it in lieu of more transformative processes that I think we should be investing both imagination and other economic resources into.
And the way that racism is embedded in everyday technologies is, I think, what you are referring to when you call it the New Jim Code, which of course is taken from Michelle Alexander's notion of the New Jim Crow.

Yes.

Can you unpack for us what you mean by the New Jim Code and what's going on in these systems?

Absolutely. This is my attempt to remind us about the histories that are embedded in technology, by invoking a term that began as a kind of folk term, developed to name a broader and broader set of practices around white supremacy, and that then got resuscitated through the book The New Jim Crow to describe how mass incarceration is a license to legally discriminate and reinforce caste hierarchies, extending slavery and Jim Crow into the present. For me, the current milieu is not simply the next step. It's not that we're over old-school segregation; we're not over mass incarceration. But it's thinking about how technoscientific fixes now create new forms of containment, and how this history is important to understanding the impact of the technologies. So it's a combination of coded inequities, often more subtle than past regimes of racial hierarchy. But the key to the New Jim Code, the real distinction, is that it's purportedly more objective and neutral than the prior regimes. And that's where the power and the problematic of this regime lie: we put our guard down because we're promised that it's more neutral and objective. And precisely when we put our guard down is when our antennae should go up. That is one way to start thinking critically about this. It came up a number of times in the previous panels; I think there is a heightening consciousness around being aware of anything that is promised to be more objective and neutral. And what we see here is a whole set of practices under that umbrella.

The notion of objectivity and neutrality, and the idea that a machine could be free of bias, is something that I really grappled with a lot as I was writing my book. Because as a computer scientist, when you do computer science you are ultimately doing math. Computers literally compute, which we kind of forget. Really it's just a machine doing math, and ultimately all the fancy stuff we do with computers comes down to math. And yeah, when you do a math equation, it's just an equation. But one of the things that I think is so interesting is what happened in the math and computer science community: they got really carried away with the idea that math was superior.

Yeah.

This is the basis for an idea that I call technochauvinism. Not quite technological determinism, but technochauvinism, which is the idea that technology is superior, that technical solutions are superior to human solutions. It derives from the idea that math is superior to, say, the humanities or social sciences or any other form of intellectual inquiry. And then when we look at the people who for centuries have said, math is so superior, we as mathematicians don't have to worry about pesky social issues because our work is so highfalutin and important, look at how homogeneous they are. They policed the boundaries of the profession for centuries so that women and people of color, for example, were excluded. So it's not necessarily that computer scientists are out there saying, I want to go make racist technology.

Yeah.

I don't actually know any computer scientists doing that, thank god.
But it's the unconscious biases of a very homogeneous group of people that manifest that way. And they think it's just math, just computing, and that it's going to be superior to all the pesky social issues. And it's not true.

One of the things that I get from wrestling with technochauvinism, as you describe it, though it's not limited to this, is that when the conversation about equity and justice comes up, the default intervention is to look for a mathematical way to define fairness and equity. Can you say a little bit about that? I presume you've seen that computer scientists are being forced to grapple with questions of equity and fairness. How are they doing that? What do you think about how they're sort of domesticating the critique we are raising?

I'm glad you asked that. I'm so excited there is a discussion around fairness and ethics and transparency in math and computer science. It's thrilling that the conversation is happening, and the FAT* conference is doing some really interesting work. One of the fundamental problems, though, is that not everything in the world can be explained mathematically. And that's okay. But it's really hard for a lot of people to hear, because they're really invested in the idea that math is superior and that mathematical solutions will make all the problems disappear. When I think about fairness, I think about the social and mathematical definitions, or dimensions, of fairness. Let's say you have a cookie. When I was a kid and had to share a cookie, say there was only one left, obviously the fair thing to do was to break the cookie in half, and each kid gets 50 percent of the cookie. If the computer were making a decision about fairness, it would split the cookie down the middle, each child gets 50 percent, and the problem is solved. But when you actually break a cookie, there is a big half and a little half, right? So you have to negotiate the social issue of who gets the bigger half. And if I told my little brother, well, I want the big half of the cookie and you take the small half, but if you take the little half you can pick the show we watch on TV after dinner, he would be like, okay, that's fair. So we have a negotiated social definition of fairness, which works also. It's a totally reasonable transaction. The computational and mathematical definitions of fairness don't always sync with the social definitions of fairness. And I think if we try to create purely mathematical definitions of fairness in every context, we are simply going to fail, because that's not how society works.
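To make the cookie example concrete, here is a toy sketch in Python, purely illustrative and not from either book, contrasting a strictly mathematical split with a negotiated one that a numeric objective has no way to represent:

```python
# Toy sketch of the cookie example: two defensible definitions of "fair."

def mathematical_split(cookie_grams: float) -> tuple:
    """Equal division: each child gets exactly half, by weight."""
    return cookie_grams / 2, cookie_grams / 2

def negotiated_split(big_piece: float, small_piece: float) -> dict:
    """Real cookies break unevenly; fairness is restored with a side deal:
    the smaller share comes with the right to pick the TV show."""
    return {"sibling_1": {"cookie_grams": big_piece, "picks_tv_show": False},
            "sibling_2": {"cookie_grams": small_piece, "picks_tv_show": True}}

print(mathematical_split(30.0))       # (15.0, 15.0): fair on paper
print(negotiated_split(18.0, 12.0))   # unequal grams, yet felt as fair
```

Both allocations are defensible as fair, but only the first is expressible as a simple equation; the second depends on context the program cannot see.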
And this relates to one of the more striking critiques that comes up again and again in the book, which is that you completely reject the idea that machines can learn. I love that there are moments where Meredith articulates some common-sense understanding of technology and then just says: wrong. That's not it. The clarity is so refreshing. But this particular idea, that you say machines can't learn: tell us how come.

This is a really interesting feature of language. Obviously I'm really interested in building code, but as a writer and artist I'm also interested in the nuances of language, and I'm interested in cognition. When we say machine learning, it sounds like there is a little brain in the computer, because learning is something that we attribute to sentient life forms. And the term machine learning was deliberately chosen, because the people making the mathematical discipline that we call machine learning are people invested in science fiction, and really invested in making science fiction real. I love science fiction. I think it's really fun. But again, we have to draw a clear line between reality and fantasy. The habit that people have gotten into of naming things after their favorite imaginary objects really muddies the issue. Artificial intelligence makes it sound like there is a little brain inside the computer. One time I was giving a demo of artificial intelligence software at a science fair for grown-ups, and an undergrad came over and said, is this real AI? And he starts looking under the table, as if there is some little person under there. And I realized he thought that AI was something organic, that we had brought science fiction to life finally. Because this is something that a lot of people are invested in doing. But I would encourage people to just give up on the idea of general AI.

Wrong!

Well, I mean, we've tried it for a really long time and it hasn't worked. And there are much more practical ways of making something in your own image. People are invested in this idea of making an artifact that thinks, that does what you want, this thing that's going to live on after you. And I feel like if you want to make something in your own image, have a baby. It works.

All right. How are we on time? Okay, we're okay on time. Let me flip over to something about babies. You write about DNA, DNA technologies, and you connect this to eugenics. Can you tell us about the New Jim Code and eugenics, and how this is manifesting in today's technology?

Yeah, sure. My previous work was around the life sciences and regenerative medicine, and I have an ongoing interest in the social dimensions of genetics and stem cell research. It doesn't take up much of the book, but I'm trying to make a connection between genetic technologies and the questions scholars and analysts and activists have been asking for a long time about the role of genetics in reinforcing ideas of race and racial hierarchies, and to carry that into the realm of the data sciences. One of the things I love about this set of panels and the ongoing conversation is people marking the fact that what we're talking about isn't all that new. There are underlying themes, concerns, fault lines, and politics that are taking a slightly different iteration in the context of algorithms and computer science. But oftentimes the people just starting to think about it fail to reference or draw upon the deeper body of knowledge of people who have been thinking about this from other areas. In one particular section I'm looking at the way genetic technologies are incorporated into the incarceration system and criminal justice, looking at the kind of objectivity that is placed on the shoulders of genetics as an arbiter of truth, whether it's truth about someone's identity or their racial group, or the way genetic fingerprinting is used to create profiles of people based on a whole host of assumptions that don't actually hold up under scrutiny. And so it's a similar kind of phenomenon, in which we imbue technology with a whole host of assumptions around objectivity and neutrality. But it doesn't stop there.
We enroll it in already inequitable systems to make very important decisions that then go unquestioned, because we say, well, the science says, well, the test says, without thinking about who designed it, with what assumptions, what biases, what data set, what training data. So genetic technologies are part of a larger suite of technologies to which we are outsourcing really important decisions because we think they're more objective, when we should be really concerned, in part because of this longer history and relationship between the life sciences and eugenics.

I want to talk a little bit more about this notion of objectivity or impartiality, and I want to read all of you a really terrific passage from the book. Ruha writes: "The danger of New Jim Code impartiality is the neglect of ongoing inequity perpetuated by colorblind designs." And as I read this I keep finding myself wanting to do air quotes. "In this context, algorithms may not be just a veneer covering historical fault lines. They also seem to be streamlining discrimination, making it easier to sift, sort, and justify why tomorrow's workforce continues to be socially stratified. Algorithmic neutrality reproduces algorithmically sustained discrimination."

And I think it's interesting, like you said, that you have the urge to put air quotes or scare quotes. Especially with respect to discussions of race, we kind of reserve our scare quotes for racial matters, when we don't do that with so many other areas that are just as socially constructed and politically constructed, right? It's this kind of urge to create a distance from it. We don't say that the money you owe me is "socially constructed," right, a political decision. We don't use scare quotes for all these other arenas, but with race we do. So I liked that early on, in one of the panels, we were talking about racial realism: saying that although race is socially constructed, it is powerful in its impacts, which means we don't need scare quotes. Its reality is in the impact it has on people's lives, to the extent that people are dying because of this reality. So we don't need to set it apart as somehow uniquely socially constructed when it's related to all these other areas. That is just a kind of basic thing for us to begin questioning: instead of creating distance, say this is part of our lived reality that we have to wrestle with.

How about we do scare quotes around "algorithmic neutrality"?

Yeah. Another thing you talk about in the book that I think is fantastic is the idea of defaults in the system, and how the systems default to discrimination. Can you tell us more about that?

That's my way of trying to think about business as usual. If we just inherit a certain process or technology or practice and we don't do anything differently, we just follow the instructions, just clock in or out, right, the default setting of our society is white supremacy, among other things. Just doing business as usual, you don't have to have any animus in your heart. Just clock in and out, do it how it's always been done, and you reproduce the default. So this is a call for us to actually exercise that agency we started with, to exercise that kind of latent power we are taught we don't have, and to begin to question the default settings. Rather than code-switching to fit into existing systems, we can rewrite the underlying code, the norms, the values, the discriminatory practices that we inherit.
And so when we think about defaults, it's not just the technology that has default settings. Our social milieu has default settings too. To think otherwise is to set technology apart as somehow removed from society. It's with us. It's part of us. And so the things that we take for granted in our everyday social interactions are also places where we can begin to exercise different kinds of practices that engender equity. We don't necessarily need a tech fix for that. I was thinking back to the conversation around voter registration, and the question the moderator asked about whether we could harness technology to engender greater voter registration. There are real old-school ways to do that too that don't involve technology. Like, why are we voting on a weekday? There are so many ways in which we have structured the default settings of voting that we could change yesterday, that could spike the number of voters, and that don't involve new gadgets, new apps, new anything. So it's a decision not to do that, right? It's a political decision not to do that. And so, again, when we are offered some new thing, I wonder if we can just accomplish that goal through other means that don't require fancy new shiny gadgets.

Which gets to my last question for you, which is kind of the marching orders in this book. You say we need to reorient how we think of technology, away from the shiny and new to the everyday. Why did you land on that as the final reorientation for the reader?

I really care about empowering people around technology. So often, especially with artificial intelligence, people talk about AI but don't really understand what AI is and isn't, and therefore we make ourselves susceptible to people who are telling us things that will lead us to poor decisions, right? So I really want people to understand how AI and machine learning work, what artificial intelligence is. One of the things I do in the book is actually demonstrate what it looks like when you do machine learning, because most people have never seen that. You have probably heard the term machine learning, but unless you've sat down and looked at what it looks like when somebody does machine learning, it still kind of feels like magic, and it's confusing. So I want to demystify things.

And I also want to go back to what you said about defaults, because that was really interesting. I think about defaults a lot as a programmer, because when you are writing code you need to figure out what your base case is. You need to figure out: if there is no value for a variable, what do I put in as the default? And the easy thing to do when you write code about social issues is to say, okay, well, the default is just going to be what it's always been. As a Black woman, I look at the world and I see racism. I see sexism. So I'm aware that those are the defaults in the world. When I write code, I try to do it with an awareness of the defaults of the world. And some of those defaults reflect problems that haven't been solved. We haven't solved racism in the real world, so there is no fix for racism in the code.
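As a concrete illustration of how a default value can quietly encode a policy, here is a minimal Python sketch. The scenario and field names are invented for illustration; this is not code from either book.

```python
# Invented scenario: the default you choose for a missing value is a
# policy decision, not a neutral technicality.

applicants = [
    {"name": "A", "prior_pay": 72000},
    {"name": "B"},  # no prior pay on record: what default gets filled in?
]

def salary_offer(applicant: dict, default_prior_pay: int = 0) -> int:
    """Offer = prior pay plus 5 percent. Records missing prior_pay fall
    back to the default, so a default of 0 silently lowballs anyone with
    a gap in their data."""
    prior = applicant.get("prior_pay", default_prior_pay)
    return int(prior * 1.05)

print(salary_offer(applicants[0]))                           # 75600
print(salary_offer(applicants[1]))                           # 0
print(salary_offer(applicants[1], default_prior_pay=60000))  # 63000
```

The point is that `default_prior_pay=0` looks like a harmless technical choice, but it is really a decision about who gets shortchanged when the data has gaps: the default is just going to be what it's always been, unless someone questions it.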
The real-world fixes have to come before the computational fixes, because this is not an equation. We all need to be on the same page first, and we need to understand what we're talking about. We need to stop having conversations where one person is talking about, like, the robot apocalypse and another person is talking about risk assessment scores. When we talk about artificial intelligence, we need to be talking about the same thing so we can have a productive conversation.

And, sorry, there is a really beautiful strand through the book where Professor Broussard draws people in and demystifies, which makes it a wonderful addition to any classroom.

Thank you. Do people often ask you, what can we do? I get this question a lot, as if there is one thing an audience can take away, one thing we can all go out and do, and if we did that one thing we'd solve all the problems. And, I don't know, sometimes I give an answer, and I'm like, this is the one thing you should do. Then other times I push back a bit and say, well, the notion that there is just one thing we could do is actually part of the problem, because it comes from tech culture, where you boil a problem down to the pain point, you articulate what the pain point is, you write code against it, and then it's fixed. So do people often ask you, what can we do? And what do you say?

Yeah, you know, my thinking around this is that we have gotten to this point through many different avenues: top-down, bottom-up, horizontal. Every tentacle of social life reproduces inequity and injustice, which to me means that what justice and equity look like is a challenging of every single tentacle. That is, no matter what we do, we have an opportunity to try to redress and engage with the problem. So it's definitely not a single fix. An umbrella term that I use in the book to articulate a move against the New Jim Code is to think about what abolitionist tools look like, and tools not simply in the techie sense but as techniques and practices. While the strand of abolitionist tools runs throughout the text, the last chapter really grapples with a range of them: community-based organizations trying to get communities to engage critically with the technologies that shape our lives; the role of artists who are taking up the harm and promise of technology. Things like the White Collar Crime Early Warning System, which takes the idea of predictive policing and turns it on its head, heat-mapping financial crimes all over the country, so we end up with hot red on Wall Street; you know, we are probably in a hot red zone right now. They created an app that will ping you, warn you, when you have entered a white-collar-crime zone. And they averaged the likely face of a criminal using the LinkedIn photos of 6,000 executives. So it takes that idea Derrick Bell articulated around critical race theory: sometimes the best way to know what we're dealing with is to reverse reality, to create a racial reversal, to see the absurdity of a particular reality through its opposite, right? In this case we chuckle and laugh at a white-collar early crime warning, when those same techniques are being weaponized against Black and Latino communities.
And then, beyond the artists, there is a whole set of abolitionist techniques and tools that we don't need to lay out publicly, that people are using to survive digital dragnets, right? By writing about them and exposing them, you can actually give that information to people who can then do harm. So there is a whole host of abolitionist tools that we need to know exist and are being developed. There is a Digital Defense Playbook by the Our Data Bodies collective. One of the things we have to think about here is what we need to survive and to thrive that doesn't necessarily need to be blasted on C-SPAN; otherwise the underground railroad becomes, as Douglass pointed out, an upperground railroad. There are many things happening in locales and organizations and among individuals that we could put under this umbrella of abolitionist tools. But for me it was important to begin articulating this, and to be in conversation with people who are seeking something more transformative than another tech fix, a solutionism, a reform that makes something more marketable, or more ethical in a superficial sense, without challenging the underlying inequality and injustice that is killing people. For me it is a concept that can be in conversation with cousin concepts and with the ideas of people working to transform that larger milieu, which, as you say, needs to be changed before we can see it in the code.

And I feel like one of the abolitionist tools you use in your own work is weaving in your personal story as a jumping-off point for talking about larger social issues. Is that something that's going on?

I mean, you know, that's part of the process of undisciplining myself. Again, if we think about the feigned objectivity, the god trick, what Haraway would call the view from nowhere, where we pretend to sit above it all and make judgments, then the opposite of that is to situate yourself, to show from where you look, to show why you care about things. In terms of our knowledge production, if what we are critiquing is the god trick, the view from nowhere, then you need to tell us where you're looking from, right? And so throughout the book, it doesn't overpower the text, but you get to know me, because otherwise how can you evaluate my analysis if you don't know what's motivating it? Even how we write and how we talk is part of the transformation, which oftentimes our disciplines do not condone.

And my favorite way that you employ this technique in the book is in the introduction, where you write about how your personal experience with police surveillance in Los Angeles influences your understanding of how technology perpetuates racism. I feel like this is really important, because in immersion journalism we talk about whether or not you put yourself into the work as a character. The rule of thumb is that you don't put yourself in unless you absolutely have to be there: if you can't tell the story another way, or if your subjective experience is really meaningful to the way you tell the story. And I felt like your subjective experience was crucial to how this story is told, to how the issues you explore are set up in the world.

Yeah. And to know what she means, you'll have to get the book. Just kidding, it's just the preface. And it's terrific. You should definitely look for this section.
It's an experience shared by many people, growing up in an overpoliced, heavily policed neighborhood, right, where you grow up seeing your friends lined up against the fence on the way to school, getting patted down, helicopters rumbling over the roof. This is Los Angeles. It's growing up with this as the normal way of thinking about what police are. And thinking now about how you don't necessarily need the audible rumble of helicopters to surveil communities: what is that transition from the explicit and obvious, what you can point out, to the much more embedded? This is where organizations like the Stop LAPD Spying Coalition come in. They think about the more invisible forms of surveillance that nevertheless harm communities, and we need to highlight that more. That's what the text is trying to do.

I thought we could finish by talking about organizations that are doing good work in this vein, because I like to uplift some of the organizations and individuals out there who are doing really amazing work in destroying the New Jim Code. The first one I would offer is Data for Black Lives, which is a policy group. They have a really great conference every year. Yeshimabeit Milner is a powerhouse. And they are doing interesting work on nutrition labeling for algorithms, so we can see the impacts on our lives. What are some of your groups?

I have a whole list on the resources page on my website. But one of the heartening developments over the last couple of years is the growing movement from within the tech industry, things like Tech Won't Build It and More Than Code, where we begin to break down the binaries between outsiders and insiders. One of the things I attribute that to is that those people must have taken a good sociology class in college. But we can't rely only on that, because one of the questions I have, as that movement has grown up around opposing ICE and various military contracts, is the lack of movement when it came to the vast dragnet around Black and Latino communities before this particular iteration, the use of surveillance technologies going back much further. We just can't wait for tech insiders to grow a conscience. There are various movements I list on the resources page.

Very important. Very important list. A very comprehensive list as well. I checked it out the other day, and wow, you really nailed all the important voices.

I'm sure I'm missing lots. Send me a note and I'll add more.

Your website is?

Ruhabenjamin.com, under Resources.

We have time for some questions. We have a question up there in the back.

Okay. Can you hear me? Yes. Bear with me. So, my question is, and this has been a great day, chock-full of information. You talked about how you went into the field because science and technology were kind of lumped together with this idea of exceptionalism, and working on diversity and issues of race in academia, there's a feeling that you have to explain yourself, or you're kind of marginalized because you're talking about yourself. So I'm getting to a point about that idea of reversal, the role reversal, and I have to tell this story, because I have to situate myself to get this point through. I was the chair of a search committee, and the Black candidate was just as qualified as the white candidate and an Asian American candidate who withdrew.
As the chair of the committee, I felt my colleagues did not see the way that race was operating in the room. We're in a field of library and information studies, and one of my colleagues, with whom I have a personal relationship, came up to me and said, can I be honest with you, I don't feel comfortable talking about race. And that's one of the reasons I'm asking. Anyway, thinking about this idea of exceptionalism: how is it that, if you use reverse logic, white men are able to come into this tech sphere and be given all this prestige when they haven't done the work of unpacking their own biases? One of the things you said is that we tend to default to math. So, for the criteria for the search, we identified strengths and weaknesses, and this colleague came back afterwards and said, I came up with this chart, I put the candidates on the chart, and this helped me make my decision. His decision had already been made, but that was the logic: this chart is really going to help us all make this decision. In reality, the chart encoded the preferences and biases in favor of the candidate that everybody else wanted. Safe to say, if we used reverse logic, we would be able to say to a white person: you have to do your homework on who you are. We would never say: Mark Zuckerberg, in order to question your white privilege and see how you got to the point of creating Facebook, take this year off from college, a gap year, to question your own privilege.

It's a good example. It's an example of an algorithm that has biases and values embedded in it. Maybe a better approach would have been for you to come in and say, well, I did my own chart, and this is what I came up with, and in that way show how your prior assumptions about who the good candidate was shaped what you valued and what you weighted more or less. That would be a way to reflect it back. I'm not saying you should have done that, but in this scenario, rather than just saying your thing is biased, you actually perform it, by showing: I can create a whole different algorithm so that my candidate comes out on top, because of the characteristics I value and how I weighted them. It's just a thought process. Searches are fraught.

I think maybe it would be fun if we provoked people by saying, hey, you need to examine your white privilege before you build technology. One thing I would also recommend, in terms of vocabulary for these kinds of conversations, is a book called Damned Lies and Statistics; there's also a sequel called More Damned Lies and Statistics. A lot of the rhetoric around these supposedly neutral statistical frameworks is the language of calculation, which is distancing, as you said. So if you learn the lingo, then you can offer the same lingo as a counter, right? If you are familiar with, say, the availability heuristic, you can throw that out in these conversations. It's a useful way to counter the distancing.

I would say, on that: I'm all for empowering people with the language and the tools of the terrain on which you're fighting, but I also think that there is often a call for people to gain this technological literacy and not the corollary, for people to gain a basic level of social and racial literacy. So the challenge is not just to get better at playing the game as it's been set up, but to question: why is it that I have to develop this language and these tools, and you get to be ignorant on X, Y, and Z and still call yourself an expert, right?
I think there needs to be a related demand for people to develop a basic sense of racial and social literacy before they start getting their hands dirty developing things that are going to create all kinds of havoc in people's lives.

Absolutely. Do you feel like computer science education has enough sociological literacy embedded in it?

No. I think there are some programs that are better than others. There's a growing demand for it, for infusing ethics, but it goes back to a point you made earlier about the way ethics is operationalized. I am ambivalent, based on my understanding of how bioethics as a field has grown up, as a kind of handmaiden rather than a challenge, rather than a real site of questioning and engagement; a field that often clears the way and makes consent possible without empowering communities for informed refusal. Thinking about the way other ethics fields have developed in connection with a STEM field, we have reason to be wary of robo-ethics or techno-ethics being developed solely to enable technology, without allowing communities and people to have a critical say in refusing certain types of technologies, such as facial recognition. Alondra Nelson says we need to open up a space to say no to technology, which I think is really important.

To your point about ethics: when we think about ethics and the historical tradition of ethics, we have to think about which field it comes from. It comes from philosophy. Philosophers own the domain of ethics. So we have to look at philosophers and think about who is in that field. It's more than the STEM fields.

Exactly. I think we need to open up the conversation to the social sciences and humanities as well. It needs to be a truly interdisciplinary conversation, and we need to do less policing of boundaries.

I think we have time for one more question. Right over there.

Thank you both so much for what you brought to us today. I really appreciated it, and it's given me a lot to think about going forward. Thinking about the future of the intersection of science and race, I think about two things: Toni Morrison's statement that the very serious function of racism is distraction, but also Dorothy Roberts's consideration of biosocial plasticity, that every time we take a step forward at these intersections, it gives new excuses for biological and scientific determinism. So my question is: how do we confront these issues, and anticipate them, without reducing ourselves to always being in and around these problems? How do we not reduce our livelihood and research to being in and around these problems?

That's a great million-dollar question. Do you have an immediate thought about it?

I have some thoughts around it, a set of responses to the Toni Morrison part and the Dorothy Roberts part. One of the things that I've come to over the last two years, prompted by some senior scholars: I'll give you a very concrete example on the first, the Toni Morrison point. I had finished the edited volume and was trying to think about the title. The working title was about technology and oppression, coming out hard, you know, with the critical analysis. And one of the senior contributors to that volume said: you're ceding all of the intellectual terrain, the framing, to the problem, and you're not giving any space to your own imagination of what the good life is, what the alternative is, what world you want to live in.
To me, that exchange showed how, in the simple naming of something, we can give everything over to the response, however important, without giving intellectual space to the world we want: naming it, claiming it, having a word for it. That's partly why I went back and retitled it, thinking about what liberatory imagination is. We have to reclaim that intellectual space. I do agree that the imagination of an elite few gets infused into so much of this; it shows us the power of imagination, a kind of perversion of imagination as an oppressive force. But imagination is something we have to reclaim as ours. How do we want to materialize our own imagination of what the good life, what a just society, is? I think that is in part a response to Toni Morrison. I've heard her talk over the last few years, even in our writing, about writing more about what joy is, what happiness is, having stories about that in addition to the stories of the pain and the violence and the hardship. It's finding that balance.

I have some other thoughts around the plasticity, the Dorothy Roberts point, which I think is important. It pushes us to really understand the limitations of the facts, of the data around something, even in our project as teachers. What is data good for, and what isn't it good for? I'll leave you with this last reflection. It has to do with colleagues out at Stanford who wrote a great set of papers about how just presenting people with data about racial disparities in the criminal justice system actually doesn't lead them to support reforms of the system, specifically white Americans. They did a study in New York, showing white American respondents the disparities and then asking, now are you willing to support reforms of the three-strikes law and stop-and-frisk? The more the respondents were exposed to the knowledge about disparities, the less likely they were to support the reforms. That tells us something is happening between their eyes and the data, and that something is all of the stories, the narratives, the interpretive frames: oh, if there are more Black people behind bars, they must deserve to be there, they must be more criminal. It confirms all the racial narratives. And that tells us we have to be as rigorous around the narratives and the stories, and around challenging them as a terrain of struggle, as we are about producing better data, because the data is not going to save us. Again, it's thinking about where we invest our energies. It's also a challenge to this idea of producing more facts and data about inequality, which has a kind of perverse quality. Who are we trying to convince? If they are already not invested in our humanity, why are we expending our intellectual energy on more stuff that's not making a dent? We have to examine what we think shifts society, where we think social change comes from, and pour more of our energies into those life-affirming projects rather than simply responding to what is.

Well put. [applause]

And I think that is a wonderful note to end on, so we can all feel inspired and leave the day with your voices ringing in our ears. Thank you so much. Thank you, Meredith Broussard. Thank you all. [applause] Thank you both. Please buy both of their books. Buy two copies if you need to, or more. I want to finish out the day with a poll of sorts, because I was told someone in the room posted something on Twitter that I want to confirm by your voting. How many of you have been to a technology-related conference where all the speakers were people of color?
Before today? Before today. And the same where all, I should say the majority, of those speakers were women? Okay. All right. Well, this is, of course, part of the premise of the Center for Critical Race and Digital Studies, which is to say: what if the conversations we had today were at the center of our understanding, our development, how we think about public policy and the public interest in technology? What if the questions and concerns about racial group interests, and damages, and joy, and all of these prerogatives were the lens through which we made our decisions about how technology would inform and determine, in some respects, our future? That is the promise of this group. To finish out the evening, which we'll do upstairs with a reception that I hope you'll be able to join us for, we will give you an opportunity to continue to engage with this work through the production of what we have called the CRDS syllabus. Rachel Kuo and Lori Lopez will say more about that during the reception. For now, thank you. Thank you to all of you who watched on C-SPAN and the live stream. Thanks to all who came out to join us today. Thanks to all the CRDS affiliates who put together programs and lent us brilliant minds and ideas. Thank you, and let's go have a drink. [applause]