In his new book, Christian asks: if we continue to rely on artificial intelligence to solve our problems, what happens when AI itself becomes a problem? His background spans computer science and poetry, and his work tackles both the ethical and the technological. It has caught the attention of people like Elon Musk, who said Christian's book The Most Human Human was his night table reading. The new book asks whether AI is a replication of our own biases, and how we can get a handle on the technology before we lose control. He argues that although we have trained machines to make these decisions for us, eventually humans will still need humans.

We'll be discussing a lot in the next hour, and I want to ask your questions too, so if you're watching live with us, please put your questions in the text chat on YouTube so I can weave them into the conversation today. Thank you, Brian, welcome, and thank you for joining us.

It's my pleasure. Thank you for having me.

This is not your first book, of course, but I want to ask an opening question, and it's not the type of question you usually ask an author: why did you decide to tackle this topic now?

A great question. The initial seed for this book came after my first book had come out. As you mentioned, Vanity Fair reported that Elon Musk had it as his bedside reading, and I found myself in 2014 attending a Silicon Valley dinner group, a bunch of investors and entrepreneurs. They had seen the thing about Elon Musk reading the book, and they invited him to join, and to my surprise he came. There was this really fascinating moment at the end of the dinner when the organizers thanked me and everyone was getting up to go home, and Elon Musk forced everyone to sit back down and said: no, no, but seriously, what are we going to do about AI? I'm not letting anyone leave this room until you either give me a convincing counterargument for why we shouldn't be worried about this, or give me an idea for what to do. It was quite a memorable vignette. I found myself drawing a blank. I couldn't give him a reason why we shouldn't be worried; I was aware of the conversation around AI, and for some people it's a human extinction level risk, for others a present-day ethical problem. I didn't have a reason why we shouldn't be worried, but I also didn't have a concrete answer to his question: so seriously, what is the plan?

As I was finishing my previous book, I really began to see, starting around 2015, a dramatic movement within the field to actually put some kind of plan together, both on the ethical questions and on the further-into-the-future safety questions. Both of those movements have grown, I would say, explosively between 2016 and now. The questions about ethics and safety, and what in the book I describe as the alignment problem, how do we make sure the objective the system is carrying out is in fact what we intend for it to do, have gone from marginal, somewhat philosophical questions at the edges of AI to, I would say, the central question. So I wanted to tell that story and, in a way, answer Elon's question: what are we doing?

When I was getting into this, there's a lot of obviously very complex technology here. I don't think a lot of people are aware of how it's applied in life-and-death situations in our society today. You were talking before the start of the program and gave a number of examples of AI tools, and one of your examples was the algorithms being used by judges, not just in California, where instead of cash bail a judge will use an algorithm to determine whether or not a suspect is going to be released before trial.
Proposition 25 would affirm a law replacing cash bail with an algorithm-based system. A very complicated issue, and what surprised me when I was looking at it was that the NAACP and Human Rights Watch oppose this proposition because of the inequities of the algorithms. Could you give us an example and build from there: what is the algorithm, and how did we get here?

Absolutely. There's a nearly 100-year-long history of statistical, what are called actuarial, methods: the attempt to predict outcomes in probation and parole. That, as I say, started in the '20s and '30s, but really took off with the rise of personal computers in the '80s and '90s, and today it's implemented in almost every jurisdiction in the U.S., municipal, county, state and federal. Increasing scrutiny has come along with that, and it's been very interesting to watch the public discourse around these tools. For example, The New York Times editorial board was writing in 2015 these open letters saying it's time for New York State to join the 21st century: we need something that is objective, that is evidence-based, and we can't just be relying on the whims of the folks behind the bench. We need science. Then sharply that position changes, and by the end of 2016 The New York Times is running a series of articles saying algorithms are putting people in jail, algorithms have seeming racial bias, calling out by name one particular tool which is among the most widely used throughout the United States. The question has really ignited an entire subfield around what it means to say that a tool is fair. The system is designed to make predictions about whether someone will reoffend if they are released on parole or pending trial. What does it mean to take these concepts that exist in the law, equal opportunity, equal protection, et cetera, and turn them into statistics? How do we look at a tool like this and say whether we feel comfortable with it?

You actually give us some examples of a black suspect and a white suspect with similar crimes and different backgrounds, and how much more likely the white suspect was to go free, including one of [inaudible]. So what goes into the baking of that cake with the built-in biases? Because the biases were not intentionally baked into it, yet they are still hard-baked in sometimes.

It's a very good question. One place to start is to look at the data that goes into these systems. Let's just think about the pretrial case for now. Typically the tool is predicting one of three things: the first is your likelihood of not making your court appointment; the second is committing a nonviolent crime while you are pending trial; and the third is committing a violent crime. The question is where these models are trained, and if you look at something like failure to appear in court, the court knows about it by definition. That data is going to be relatively unbiased, in the sense that regardless of who you are, it gets recorded when you don't show up. But if you look at something like nonviolent crime: if you poll young white men and black men in Manhattan about their marijuana usage, they self-report that they use marijuana at about the same rate, and yet if you look at the arrest data, the black man is 15 times more likely to be arrested for using marijuana than the white man. In other jurisdictions the multiple might be different; it varies from place to place.
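To make the structure concrete, here is a minimal sketch of the kind of actuarial risk tool being described, with entirely hypothetical features and data; real tools are proprietary and far more elaborate. The key detail is that the training label is rearrest, not crime, which is the proxy problem Christian turns to next.

```python
# Minimal sketch of a pretrial risk model, assuming hypothetical data.
# Real tools are proprietary; this only illustrates the structure.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical features: age, prior arrests, missed past court dates.
X = np.column_stack([
    rng.integers(18, 70, n),   # age
    rng.poisson(1.5, n),       # prior arrest count
    rng.binomial(1, 0.2, n),   # missed a past court date
])

# The label is REARREST, not crime. If policing intensity differs
# across groups, the label itself is a biased proxy for offending.
y_rearrested = rng.binomial(1, 0.3, n)

model = LogisticRegression().fit(X, y_rearrested)

# The "risk score" a judge would see for a new defendant:
defendant = np.array([[22, 3, 1]])
print(model.predict_proba(defendant)[0, 1])  # P(rearrest), not P(crime)
```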
That's a case where it's really important to remember that the model claims to be able to predict crime, but what it's actually predicting is rearrest, and rearrest is an imperfect, and systematically so, proxy for what we really care about, which is crime. It's ironic to me: as part of this project, in researching these systems, I went back into the historical literature from when they first started getting used, which was in Illinois in the 1930s. At the time, a lot of the objections were coming from the political right, and ironically they made the same argument that progressives are making now, just from their side. Conservatives in the late '30s were saying: now wait a minute, if the bad guy is able to evade arrest, the system doesn't know that he committed a crime, so the system treats him like he's innocent and will recommend his release, and the release of other people like him. Now we hear the argument being made from the left, which is to say: if someone is wrongfully arrested and wrongfully convicted, they go into the data as a bad person, a criminal, and the system will recommend the detention of people like them. It's the same argument, just framed in different ways. That's a very real problem, and we are starting to see groups like, for example, the Partnership on AI, which is a nonprofit industry coalition of Facebook, Google and a number of other organizations, with about 100 different stakeholders, recommending that we not make predictions of nonviolent rearrest at all.

The second component, and it's a very vast question, but the second thing worth highlighting, is the question of what you do with the predictions. Let's say you've got an above-average chance of failing to make your scheduled court appointment. That's the prediction. There's a separate question, which is what we do with that information. One thing you can do with it is put the person in jail while they wait for trial. That's one answer. But it turns out there is an emerging body of research showing that if you send people a text message reminder, they are much more likely to show up for their court appointment, and there are people proposing solutions like providing daycare services for their kids or providing them with transportation to the court. So there's this whole separate question: as much scrutiny as is rightfully being directed at the actual algorithm and the prediction, there is a much more systemic question, which is what we do with the prediction.

And if you are a judge and this person has failed to appear, you might want to recommend some kind of text message alert for them as opposed to jail, but that may or may not be available in that jurisdiction, so you have to use what you have, and that's a systemic problem, not an algorithmic one per se, but the algorithm is caught in the middle, if you will.

Let's get out of the crime and punishment area. You talk later in the book about hiring, and Amazon comes up. What they were finding was that the system was recommending a lot of men, and the reasons lay in the way the system was being trained and used. Tell us about that: what were they trying to do, and what went wrong?

This involves Amazon around the year 2017. They, like many companies, were trying to design a system to take some workload off of humans. If you have an open position, you get a huge number of résumés coming in, and ideally you'd like some kind of system to triage them.
In a somewhat cute, or ironic, twist, Amazon decided they wanted to rate applicants on a scale of one to five stars, so rating prospective employees the same way Amazon customers rate products. To do that, they were using a type of computational language model called word embeddings. Without getting too technical: for people who are familiar with the rise of neural networks, these neural network models that were so successful around 2012 started to move into computational linguistics. In particular, there was this very remarkable family of models that represent words as points in space, so if you have a pile of documents, the model can guess a missing word based on the other words nearby. Each word becomes a point in an abstract 300-dimensional space, and these models have a lot of other cool properties. You can do arithmetic with words: you could take king minus man plus woman, search for the point in space nearest to that, and it would be queen. You could do Tokyo minus Japan plus England, and these sorts of things. These numerical representations of words ended up being useful for a surprisingly vast array of applications, and one of these was trying to score the quote-unquote relevance of a résumé to a given job. So what you could do is say: here are all the résumés of the people we have hired over the years; throw those into this word model, and for any new résumé, let's just see which of its words have positive attributes and which have negative attributes.

Okay, it sounds good enough, but when they started looking at this, they found all sorts of bias. The word "women's" was assigned a penalty: if you went to a women's college, that word was getting a negative deduction, because it's farther away from the words on the résumés that were successful in the past. It doesn't appear on the typical résumé that did get selected, nor do words similar to it. Of course the red flag goes up, and they think, maybe we can delete this attribute from the model. Then they start noticing it's penalizing women's sports, field hockey for example, so they get rid of that; and the names of women's colleges, so they get rid of those. Then they start noticing it's picking up on all of these various syntactical choices that were more typical of male engineers than female engineers, so for example words like "executed" and "captured": "I executed...", "I captured..." were phrases more typical of men. At that point they basically gave up and scrapped the project entirely.

In the book I compare it to something that happened with the Boston Symphony Orchestra in the 1950s, when they were trying to make the orchestra, which had been male-dominated, a little more equitable, so they decided to hold auditions behind a wooden screen. But as they found out later, when the auditioner walked out onto the wooden parquet floor, of course they could be identified by the sound of their shoes. It wasn't until they additionally instructed people to remove their shoes before entering the room that the gender balance of the orchestra finally started to even out. The problem with these language models is that they are basically hearing the shoes. They are detecting the word "executed," and the team in this case just gave up and said: we don't feel comfortable using this technology. Whatever it's going to do, it's going to identify some very subtle pattern in the engineering résumés we've had before, where the gender balance wasn't there, and it's just going to replicate that into the future, which is not what we want. In this particular case they just walked away, but it's a very active area of research.
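The word-vector arithmetic described above can be illustrated in a few lines; a minimal sketch with toy three-dimensional stand-in vectors, where real systems use pretrained 300-dimensional embeddings such as word2vec:

```python
# Sketch of word-vector arithmetic with toy vectors; real systems use
# pretrained 300-dimensional embeddings, not these hand-made stand-ins.
import numpy as np

vocab = {
    "king":  np.array([0.9, 0.8, 0.1]),
    "queen": np.array([0.9, 0.1, 0.8]),
    "man":   np.array([0.1, 0.9, 0.1]),
    "woman": np.array([0.1, 0.1, 0.9]),
    "apple": np.array([0.5, 0.5, 0.5]),
}

def nearest(v, exclude=()):
    """Return the vocabulary word whose vector is most cosine-similar to v."""
    return max(
        (w for w in vocab if w not in exclude),
        key=lambda w: vocab[w] @ v / (np.linalg.norm(vocab[w]) * np.linalg.norm(v)),
    )

# king - man + woman lands nearest to queen.
target = vocab["king"] - vocab["man"] + vocab["woman"]
print(nearest(target, exclude={"king", "man", "woman"}))  # -> queen
```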
How do you debias the language model? You try to identify subspaces within the 300-dimensional space that encode stereotypes, and then delete those dimensions? This is an active area of research, and it's ongoing.

How much did Amazon spend developing that?

It's a great question. They are pretty tight-lipped about it. Most of what we know comes from a Reuters article. I wasn't able to do a lot of follow-up, but as I understand it, they not only disbanded the product, they disbanded the team that made it, so they really washed their hands of it.

I'm asking because I assume millions were put into that. They could have hired an extra H.R. person or two. Another example I wanted to get into, from a different angle, is the self-driving car, and you talk in the book about the fatality that happened because of the way the car was recognizing this person. Again, explain that.

This was the death of Elaine Herzberg in Tempe, Arizona, the first pedestrian killed by a self-driving car. It was an R&D Uber vehicle, and the full National Transportation Safety Board review came out at the end of last year, so fortunately I was able to get some of it into the book. It was very illuminating to read the official breakdown of everything that went wrong. It was one of these things where probably six or seven separate things went wrong; if any one of that entire fleet of things had gone differently, the outcome might have been different.

One of the things that was happening was they were using a neural network to do object detection, but it had never been given an example of a jaywalker. In all of the training data the model had been trained on, people walking across the street were perfectly correlated with zebra stripes, perfectly correlated with an intersection, and so the model didn't really know what it was seeing when it saw this woman crossing mid-block. Most object recognition systems are taught to classify things into exactly one of a discrete number of categories. They don't know how to say that something doesn't fit any category they know, and this is again an area of active research that the field is making headway on only recently. In this particular case, the woman was walking a bicycle, and so this sent the object recognition system into a fluttering state: first it classifies her as a cyclist, but she isn't moving like a cyclist; then a pedestrian; but then it sees the bicycle, and maybe it's an object rolling in the road; no, I think it's a person; no, I think it's a biker. Due to a quirk in the way the system was built, every time it changed its mind about what type of entity it was seeing, it would reset the motion prediction. The system is constantly predicting how a typical pedestrian, vehicle, or cyclist, et cetera, would move, and as a result where they will be in a couple of seconds. Every time it changed its mind, it started recomputing that prediction, so it never stabilized on a prediction.

There were additional things here, overrides that the Uber team had made. Cars in 2018 have a rudimentary form of self-driving that will automatically brake or swerve, and they had overridden that and added their own system on top, and the two interacted in unfortunate ways. But I think the object recognition thing itself is, for me, very emblematic, and there's a question of certainty and confidence: when the system says, I'm 99% sure that this is a person, or whatever it might be, how do we know if those probabilities are well calibrated, and how does the system know what to do with them?
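On whether those probabilities are well calibrated: a minimal sketch of the standard check, binning a model's stated confidences and comparing them against how often it was actually right, with invented numbers standing in for a real detector's outputs:

```python
# Sketch of a calibration check: does "95% sure" come true 95% of the time?
import numpy as np

def calibration_report(confidences, correct, n_bins=10):
    """Compare average stated confidence to observed accuracy per bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences >= lo) & (confidences < hi)
        if mask.sum() == 0:
            continue
        print(f"claimed {confidences[mask].mean():.2f}  "
              f"observed {correct[mask].mean():.2f}  (n={mask.sum()})")

# Hypothetical detector outputs: an overconfident model claims ~0.95
# on examples it only gets right ~70% of the time.
rng = np.random.default_rng(1)
conf = rng.uniform(0.9, 1.0, 500)
hits = rng.binomial(1, 0.7, 500).astype(float)
calibration_report(conf, hits)
```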
I think many people within the uncertainty community now would argue that the mere fact that you are changing your mind should be a huge red flag to slow the car down. That alone, and that wasn't done. So it's very heartbreaking to think about how all of these engineering decisions added up to an event that would have been so much better to have avoided. The silver lining is that there are lessons being taken to heart, not just in industry but also in academia, saying we really need to get to the bottom of this question of certainty and uncertainty, because I think that's a very human thing. You see it in medicine, where you don't want to take an irreversible action in the face of uncertainty. You see it in the law with things like, I'm forgetting the term, but a judge may issue a preemptive order in advance of the real ruling because they are trying to prevent irreparable harm. There is a question for the machine learning community, which is: how do we not make an irreversible choice in the face of uncertainty, or in a high-impact situation? That requires us to quantify impact and quantify uncertainty and have a plan for what to do when we find ourselves there. All those pieces need to come together, and we are seeing progress being made on all of those fronts. It can't happen soon enough.

In these examples we've talked about, and others in the book, is the culprit the same, the same general problem? And then I'm going to ask whether you think it's being addressed.

I think there is one broad problem; in the field it's known as the alignment problem, and it's in the book's title: how do we make sure that the objective in the system is exactly that which we want the system to do? All of the examples we have highlighted so far show cases where one must be very careful in thinking about how to translate a human intention into a mathematical, machine-readable objective. We think we can measure reoffense, but we can't; we can only measure rearrest. We think we can hire the best candidates, but we can only hire candidates who superficially resemble previous candidates. We think we can classify objects into distinct categories, but many objects belong to more than one category, or we don't always know what category to put them in, and the system needs to know that it doesn't know. All of these things, and there are many other manifestations as well, speak to the fundamental issue of alignment, but the actual mechanics are different. Sometimes there's a problem with the training data. Sometimes there's a problem with the model architecture; one problem we haven't touched on yet is the black box issue of interpretability and explainability: how do we know what's going on inside the model, how do we know whether to trust it, how do we reverse-engineer what generated a given output? And there are questions about the objective function of the system: what is the quantity we are trying to minimize or maximize, and how do we define it?
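A toy illustration of the objective-function point: two candidate objectives for a ranking system, where adding a term for a cost we care about changes what the system promotes. The metric names and numbers are invented for illustration, not drawn from any real platform:

```python
# Toy illustration of objective misspecification: what you maximize
# is what you get. Metric names are invented, not any real platform's.

posts = [
    {"name": "measured take", "clicks": 40, "burnout_risk": 0.1},
    {"name": "outrage bait",  "clicks": 90, "burnout_risk": 0.9},
]

def engagement_only(post):
    # The narrow objective: raw engagement.
    return post["clicks"]

def engagement_with_externalities(post, penalty=70):
    # Same goal, but the objective now encodes a cost we care about.
    return post["clicks"] - penalty * post["burnout_risk"]

print(max(posts, key=engagement_only)["name"])                # outrage bait
print(max(posts, key=engagement_with_externalities)["name"])  # measured take
```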
Each component of the system has its own manifestation of the alignment problem. As for your second question, whether it's being addressed: that, for me, is really the striking thing that makes where we are now so different from where we were when Elon Musk cornered me and a bunch of executives in a room and no one had any particularly good ideas. I think we are seeing an absolutely remarkable shift in the field. I talked to one researcher who, when he went to the big industry conference in 2016 and said he worked on AI safety, people raised their eyebrows at him like it was a little bit kooky. When he came back a year later in 2017, there was an entire day-long workshop on AI safety. In absolute numbers the amount of people working on this is still quite small, but even over that short a time, to my mind, the shift is astonishing, and I think it can't come soon enough. I encourage all motivated undergrads and high school students to get excited about it; there's a lot of work to be done.

Talking about the learning and development that's going on in the AI research and development field: is the actual commercialization of the technology ahead of where it should be? Should it still be being modeled in the lab and not put on our roads?

That's a great question. In some ways the criminal justice stuff, as I say, has an 85-year history at this point, and we are still playing catch-up in terms of the analysis relative to the deployment. You can think of it as a race: can the understanding catch up to the actual implementation? I think we have seen that with social media. There were decisions that Facebook made about how to run their news feed algorithm, and it went over time from hand-tuned to supervised learning to reinforcement learning, but basically the narrow-minded focus on always prioritizing the content that will get the most engagement created a situation where extreme content was being promoted and people were burning out and leaving the platform, in addition to the many other societal externalities it created. They were able to replace it with a more nuanced model that factored in whether something might burn someone out or be a distraction, et cetera. But you could note that the point of that model was still to maintain user retention; those were the things considered bottom line. I think there really is a question, when you think about the alignment problem, is the system doing what we want, of looking at an actual industry and asking the meta-level question: what is it that we would love the system to be doing, and could we be led astray in that sense as well? I think those questions are going to loom ever larger. We have seen in general that whenever this topic comes up in one way or another, there are more people saying we need to be thinking about this.

As you said, one of our audience members asked about China and the widespread use of facial recognition. In the book you do talk about facial recognition and results that were inappropriately funny, or insulting, and obviously serious in one of the cases. Could you talk a bit about facial recognition? In California, I think it became a proposition whether or not to use these technologies, so tell us a bit about that.

Yeah, so this is coming through the legal system now, and if I remember correctly, the first case was in Michigan, with someone being arrested after being incorrectly identified by facial recognition.
A lot of it is going through the court system and probably heading to the Supreme Court. On the technical side, there's this really unfortunate, kind of hard to ignore, pattern of ethnic minorities being incorrectly recognized or categorized by these recognition systems. One of the famous examples was a software developer in 2015 who found a group of photographs he had taken of himself captioned by Google Photos as "gorillas." Another example: an M.I.T. researcher, back when she was an undergraduate computer scientist doing facial recognition homework assignments, had to borrow her roommate's face in order to check that the system worked, because it didn't work on hers; it worked only when she put on a white mask. This really set off an investigation: why does this keep happening? What's the underlying thing? There are a couple of different components to it, but I think one of the main ones is that there had been a preexisting lackadaisical attitude toward how these databases were put together in the first place. Part of what led to the rise of computer vision was the internet: if you needed half a million examples to train your system in the '80s, you were totally out of luck, but now, with the internet and Google Images, you can just download a million things. The question is, what are you downloading?

One of the most popular research databases was one called Labeled Faces in the Wild. The thinking was: what we want to do is determine whether two faces are in fact the same person, so they had the clever idea of using newspaper images, because they were captioned with this person and this person and this person. That way you get a giant labeled database and you can do the research. The problem was, you were at the mercy of who was in front-page articles, and the answer was George W. Bush. In fact, an analysis of Labeled Faces in the Wild done a few years ago showed there were twice as many pictures of George W. Bush in the database as of all black women combined. Which is just insane if you're trying to build something fair. To be fair to the people who collected that data, this was an academic research project; it was not intended to be used in any actual production system, but these datasets have a way of sticking around once anyone can download them off the internet. It's very striking if you look at the original papers, and I don't want to single anyone out because the attitude was widespread, but the word "diversity" was being used in the early 2010s to mean things like variation in pose and lighting, so they'll say this is the most diverse dataset assembled to date.

So there's a need to bring more focus to the training data, and also to representation in the field itself: black researchers are less than 1 percent of computer science, so there's a lot more that could be done within the field to address the question, and there are groups such as Black in AI with a number of initiatives, including scholarships and grants, trying to equalize that.

There is a related question from the audience about the male-dominated field. In the book you discuss the work researchers did on word embeddings, moving gendered associations toward gender-neutral ones. Interdisciplinary work would seem to be a requirement here, because of all the different interpretations and the social science that is mixed in with the computer science.

That is absolutely right, and that's where the field finds itself at the moment. No longer can data scientists think of themselves as purely doing mathematics; we have gotten to a point where these questions are enmeshed with how the data was collected or generated, and, if it comes from human respondents, how the question was worded.
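The Labeled Faces in the Wild imbalance is the kind of thing a simple audit of the training data surfaces before any model is trained; a minimal sketch, with invented demographic annotations, since LFW itself does not ship them:

```python
# Sketch of a dataset audit: count who is actually in the training set
# before trusting a model trained on it. The annotations here are
# invented for illustration; LFW has no official demographic labels.
from collections import Counter

rows = [
    {"name": "person_a", "group": "white male"},
    {"name": "person_b", "group": "white male"},
    {"name": "person_c", "group": "black female"},
    # ... thousands more rows in a real audit
]

counts = Counter(row["group"] for row in rows)
total = sum(counts.values())
for group, n in counts.most_common():
    print(f"{group:15s} {n:4d}  ({n / total:.0%})")
```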
And what population of people were you sampling from? Were they representative of the other groups the system will be applied to? So we are very much in a moment of interdisciplinary work between computer scientists and social scientists, lawyers, and cognitive scientists. There is interesting work being done at the intersection of AI and infant cognition: the idea is that the way small kids figure out how the world works can inform how an AI system solves the same problems. So there are many fronts on which the field is uniquely positioned at this interface, and we are seeing many more such papers, which is very encouraging.

I suspect there's a Venn diagram for that one, but did your own background help you? We have computer science in common.

My background is also philosophy. When I was a student, I was interested in the question of what it means to be human, and AI offers an angle on that question, a way of answering it differently. For 2,500 years of Western philosophy, we have asked what makes us distinct, and Aristotle answered that question by comparing ourselves to animals. There's never been a more interesting time, because we now have a completely new standard of comparison with a totally different set of answers: it's no longer analytical reasoning that defines what it means to be human, but the seat of the human experience, social ties, teamwork, collaboration, et cetera. So for me, in some ways, I feel very lucky to have this eclectic set of interests and to happen to be alive when the two disciplines are on an absolute collision course.

Do you believe in human-like machines? In the singularity, something that is called the hard takeoff, where machines begin improving themselves?

I don't really see it that way; to me it's more of a gradual, uncanny approach to intelligence. There isn't a sharp elbow, something that happens overnight, but from my perspective it is inevitable.

Coming from computer science, do you ever think a machine can think?

Of course. If you have a secular worldview, then you think we are made of atoms, and thinking is all the emergent behavior of complexity, and we are on the roadmap. OpenAI's GPT-3 has 175 billion parameters. Compared with the complexity of the human brain, that doesn't sound very impressive, roughly 0.1 percent, but the average model size is doubling every three months, so if you do the math, we should expect models that have the complexity of the human brain around the spring of 2023. That's not very far away.

Those answers feel a little sci-fi, and I don't know the answers, but this is riveting. There was the story of the chatbots that began to communicate with each other spontaneously, using whatever language they had available. Given the need we have for AI to align with what we want it to do: could a system solve the problem in a completely different way? Could an artificial intelligence that is much more advanced deliver something that is more in alignment with what we want, but whose way of reaching it is totally alien to us?

That's a great question. I think both of those are possibilities. Sometimes what is forgotten with AI is this sense of inevitability, but there are real choices in the architecture of these systems. That's already the case: if you have one giant monolithic neural network, you have no idea what's going on inside. There is an increasing science of figuring out what's going on in the network, but also of constraining the network in certain ways, so that the system is naturally divided into subcomponents and you can tell what each is doing. There are a lot of encouraging results there. So to your question: that's not necessarily the only way it can go. We will have more agency than people assume to build systems that we feel like we can trust.
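The back-of-the-envelope math from a moment ago checks out; a quick sketch of the calculation, taking the figures as stated in the conversation (current models at roughly 0.1 percent of brain-scale complexity, model size doubling every three months):

```python
# Back-of-the-envelope check of the doubling argument as stated:
# if models are ~0.1% of brain-scale complexity and double every
# 3 months, how long until parity? 0.1% -> 100% is a factor of 1000.
import math

factor_needed = 1000                  # 0.1% -> 100%
doublings = math.log2(factor_needed)  # ~10 doublings
months = doublings * 3                # ~30 months
print(f"{doublings:.1f} doublings ≈ {months:.0f} months ≈ {months/12:.1f} years")
# From late 2020, ~2.5 years lands around spring 2023, as he says.
```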
Talking about artificial intelligence, I've watched the questions change. The question I would get in 2011 or 2012 was: is AI coming for my job? By 2015 people were asking: will it destroy all of humanity as we know it?

And within the research community, the cautionary tale has shifted from the disobedient system to the obedient one: a system that is trying to be helpful but doesn't quite know what you want it to do. There is a thought experiment called the paperclip maximizer: you put an AI in charge of a paperclip factory, tell it to make paperclips, and it is so good at it that it turns the universe into paperclips. That is a caricature, but what people are worried about is not a system going rogue and deciding to exterminate us; it's a system trying to do what it thinks we want it to do, without our having managed to be specific enough. So part of solving the alignment problem means not having to specify every detail, building systems we can say "hold on, that's not what I meant" to, and that is the kind of progress that makes me a little more relaxed than I was three or four years ago.

A former computer engineer who became a systems analyst is asking, given how much more interdisciplinary the field is today: what would you tell someone who wants to go into the field, or someone switching over from where it was ten years ago?

That's an interesting question. The head of AI at Tesla has this notion of Software 2.0: Software 1.0 was programming, where you write code; now, where we are with machine learning, you don't write code, you curate training data and say, do something like this. There is a debate about whether something like GPT-3 is Software 3.0, where it's almost like you are writing the essay prompt the model is designed to fill in. You can use it to do all sorts of things, but wording it starts to feel a lot less like programming and a lot more like working with another person: how do you get the tone that you want, or the style that you want, et cetera? So I think there will be a new category of people who are not exactly machine learning people but who wrangle these giant models, and that's a new job that doesn't really exist yet, with an interesting set of skills: the words you use are very important, along with intuition about how the large models were trained. More broadly, as the questions machine learning deals with become more and more human, the field invites a certain kind of person, and that's a shift we are just starting to see.

Who are you hoping to reach with this book? Is it a general audience? The AI community? The companies trying to use the technology?

A couple of answers. One is the general public. Relative to other fields in science, AI and machine learning are things a lot of people are aware of without understanding the underlying issues, so I want to try to raise the level of the debate and give people some of the basic vocabulary, so we can be comfortable talking about it. There is also a huge class of people whose jobs never seemed to require them to know about machine learning, and now, in 2020, they are handed these algorithmic formulas, these machine predictions. There are a lot of people out there who need some familiarity with this area. And lastly, I hope the book can grow the field; this is one of the most exciting and most important things happening in science. If I can get undergrads excited about this area, saying give me a cool project on safety, let's get started, that will feel really good to me, to bring even more focus and energy to the movement.

We have time for one more question. It's a time travel question: in 20 years, where would you expect to see the state of AI, its research and its direction?

20 years is interesting.
Starting in 1955, AI was always 20 years away. And it still is. But realistically, there will be a generational replacement. The way we think of the digital native or the social media native, there will be kids who grow up in a world where human driving may have become illegal: how can you justify a human driving a car? They will come to understand themselves as inhabiting a world of all these different systems with different degrees of intelligence and agency, with interfaces that increasingly look like the way people talk to each other. Kids today have no problem talking to Alexa, for example; they find it fairly normal that there is a system you can chat with, which is remarkable if you consider that ten years ago that was not something anyone was familiar with. So the boundary will start to get blurry between the skill set of navigating a world of other humans and the skill set of navigating these systems, because they speak the same language: the system hands you the item it thinks you're reaching for, and you say, no, the other one. You communicate in words, and for a new generation, that's just how the world works. And hopefully we will have set them up to have a reasonably good world at that point.

Brian Christian, author of the book The Alignment Problem, thank you. Thank you all for watching and participating online, and for supporting the Commonwealth Club's efforts. I'm wishing you a good day; stay safe and healthy.