Political advertisements: Georgetown's law institute hosted this forum. Okay, so we're reconvening for our fourth and final panel of the day: new challenges in election oversight and regulation. For those who didn't see me at the top of the day, I'm Alexandra Givens. I run our Institute for Technology Law and Policy here at the law school, and we're really thrilled to be hosting you all today. The conversations so far have surfaced many key areas of challenge. Somebody was saying that they're feeling a little depressed, so we need to try to get the energy up for the final panel of the day. We've covered issues from the fragmentation of political discourse and new pathways for misinformation, to voter suppression, to the technical challenges of election security. This panel is going to add another topic, the significant challenges we face in election oversight and preparedness, but also talk about the solution space as well, which is, quite simply: what do we do? We have another phenomenal lineup to help answer these questions today. Sitting immediately to my left is the Honorable Ellen Weintraub, a commissioner on the Federal Election Commission. She has served as a commissioner on the FEC since 2002 and previously worked in a political law group and as counsel to the House Ethics Committee. Next to her is Patrick Day. He previously served as a longtime staffer in the Senate, where we were colleagues; he worked as national security counsel for Senator Dianne Feinstein. Next is a former ambassador to the Organisation for Economic Co-operation and Development, now at the German Marshall Fund. And finally, at the end, is Mark Lawrence Appelbaum, a Georgetown Law graduate who completed a project at the Campaign Legal Center on foreign election interference and online disinformation threats in U.S. elections. We're going to follow the same procedures that we have in previous panels. Commissioner Weintraub is going to begin and we will move down the row.

Thank you for inviting me.
Being asked to speak about new challenges in ten minutes or less is a bit of a challenge as well. I'm going to hit on a few of the challenges that I'm personally confronting at the FEC and then move into a discussion of the article that I've submitted to the symposium issue along with my co-author, Tom Moore, a proud Georgetown Law grad. So thank you for helping me get all that together; without his efforts, Georgetown would not be in possession of a draft today. Okay, so the number one challenge for me in election regulation is that we do not have a quorum to make decisions at the FEC right now. We're supposed to have six commissioners and we're down to three. We lost one three years ago, we lost another two years ago, and we lost a third just over five months ago, and why none of those positions have been filled, you would have to ask the President and the Senate, because they're in charge of that. But that is huge, and it means that we cannot launch any investigations, we can't conclude any investigations, we can't do any rulemaking, and we can't issue any advisory opinions. So that's a bit of a problem. Although, honestly, the second challenge that I confront, and have confronted for quite some time, is that even when we had a quorum it was difficult to get anything done, because the commission has for some years now been extremely ideologically divided. Polarization is a big problem in Washington, and as you can imagine, we have a problem with polarization too. The commissioners on the Republican side and the commissioners on the Democratic side have very different views on whether any regulation of money in politics is indeed advisable. One example of that is a rulemaking that has been ongoing, believe it or not, since 2011, just to clarify the rules for disclaimers on internet political advertising. We were really pretty much at an impasse, and I wasn't getting much engagement from the other side for some period of time before we lost the quorum.
The commission last took a comprehensive look at the internet and politics on the internet in 2006 and 2007. That has got to be about a century ago in internet years, and there are large areas that are unregulated that really need another look. We saw recently a case where a super PAC and the Hillary Clinton campaign were alleged to have coordinated through a bunch of communications over the internet, and their argument was: well, there's an exception for communications over the internet, except for paid advertising on another person's website; that wasn't this; so, therefore, we could do all sorts of stuff as long as the end result was a communication on the internet. Our Office of General Counsel and two commissioners disagreed with that. Interestingly enough, although it was a Democratic respondent, it was the Democratic commissioners who wanted to proceed and the Republican commissioners who blocked the investigation. That was a problem. We've seen the internet used as a way of sending both very open messages, candidates posting B-roll on their websites in order to have super PACs pick it up, even though they're not supposed to be coordinating, as well as subterfuge: we had coded messages tweeted out, and a debate about whether that constituted public information. So we've had what I've described as a digital needle in a virtual haystack. We have had a number of challenges at the FEC, as I said, even before we lost the quorum, and Congress is similarly having problems getting anything done, also due to polarization. It's very frustrating to me that the Honest Ads Act hasn't passed, which would bring internet political ads under the same framework as broadcast ads. I would also love to see Congress pass the DETER Act or something like that, bipartisan proposals to address foreign interference in our elections by imposing strong sanctions on anyone who would try it. I don't know why we can't get commonsense rules like that passed.
Why is all of this important, what goes on on the internet in politics? One-third of Americans rate the internet as the most helpful source of information on the 2016 presidential election, according to a Pew poll. Digital political advertising increased 260% between 2014 and 2018 and is projected to reach $2.8 billion for 2020. So this is not a small venue, and there are, as I've said, large areas of it that are completely unregulated right now. For this symposium, we decided to look not in the Federal Election Campaign Act but in the federal Communications Act, at Section 230, what's been described as the 26 words that created the internet: "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." There's this expansive exemption from liability for all the internet providers that was created in 1996. So if our internet regulations from 2007 are out of date, imagine looking at something written in 1996, when the sponsors specifically said they were trying to protect this baby industry. They didn't want to strangle it in its crib; they wanted to allow it to grow. And grow it has. In the third quarter of 2019, Amazon had income of $70 billion, Google had income of $40.3 billion, and Facebook had income of a mere $17.65 billion. That was just in one quarter. I don't know that we can fairly say that they're still babies in their cribs that we need to coddle and protect. There was a lot of excitement about the internet as a source of political information that could be low cost and would allow upstart candidates to avoid the big-money race and get their message out. What we've seen is that there's a real dark side to politics on the internet. Internet companies microtarget political advertising, which I can talk about later. They create filter bubbles. They create an atmosphere where counterspeech can't really emerge.
So you get very narrowly targeted ads directed just at you; somebody else who might have different ideas doesn't see the same set of ads, so they don't know that they might want to provide you with information to counter those arguments. The internet can amplify political misinformation and disinformation. We saw this coming from Russia in the 2016 election, but what is kind of scary is that now we're seeing domestic actors mimicking some of these Soviet-style disinformation campaigns, and while I can imagine various ways of going after foreign interference, it's harder when it's coming from domestic sources. The platforms are failing to adequately protect against foreign interference in our elections. At least they have gotten beyond the point, as in 2016, where they were getting money for ads in rubles and it didn't occur to them that maybe that was a problem. They caught that; they're not doing that anymore. The algorithms are designed to promote, at all costs, staying on the platform, and they have found that the best way of doing that is to keep people riled up. So the platforms, I think, are playing this really negative role in our civil discourse, which is becoming frankly pretty uncivil. And they have a serious problem with inauthentic accounts and bots. So we decided to follow the money on the way advertising works on the internet. The reason that they are making all this money and are so effective is, you know, they suck all of your personal data out of everything you do online. I mean, it's really pretty scary, reading things like: your flashlight app is sending location data out, and companies are marketing that. Who would even think that your flashlight was collecting data on you?
But because of Section 230 there's this broad immunity, because the platforms are not seen as the originators of the content. One judge said that the way the platforms are packaging the information is something akin to a content provider. That view hasn't won out; it's an outlier decision by that one judge, and the panel went the other way. But I do think it shows that people are thinking about it differently. The platforms are not operating like the phone company, where they're just transmitting information blindly through the pipes. They are taking an active role in packaging and selecting what you're going to see. Personally, I would not be averse to seeing inroads on 230 on those grounds. However, we can get at it a different way, by simply making them pay for the information that they're taking from all of us. A platform that was, you know, started in a dorm room with the philosophy of "move fast and break things" seems, now that they have broken things, to have an approach more like "go slow and don't clean up your messes." So I think we need to be creative about how we go after these problems, and one way would be to impose costs on the front end and rebalance and reallocate the costs where they belong. I think this information does belong to us, and if the platforms were forced to pay for the data that they're taking from us, that would create different kinds of incentives; maybe they wouldn't have so much targeted advertising. And in any event, we could reclaim something that belongs to us. So we think that this kind of follow-the-money approach might work, and in addition what we're proposing is a kind of 5% surcharge, which we would describe as a democracy dividend, that could be used for things like funding public media and public campaign financing. I'm told that's the end of my time, so I'll stop there, but I'm happy to answer any questions. Thank you.
Patrick?

Alex, could I borrow that clicker? Good afternoon, I'm Patrick Day. I'm senior policy counsel at Cloudflare. I want to say thank you to Georgetown and Alex for holding this important and timely discussion. Before I start to talk about voter privacy, I would be remiss if I didn't mention two programs that Cloudflare is offering to state and local governments, protecting their web properties with denial-of-service and firewall protections. There are now 174 domains associated with state and local governments in 26 states that are using the free security services under the program, and we just launched a similar one for campaigns. I actually have nothing to do with either of those programs; they predate my time at the company, but I'm proud to be associated with them, and if you have questions about them or want more information, please let me know. In 2018, as Alex mentioned, I was national security counsel for Senator Feinstein. Our committee was in the middle of the inquiry into Russian interference in the 2016 election, and I was asked to look into Cambridge Analytica. I'm sure everyone is familiar with the footage: undercover footage of the Cambridge Analytica CEO, caught on tape offering to use sex workers from Ukraine and bribery to entrap politicians, in a pitch about an election in Asia, and describing how in Nigeria they had used email hacking to obtain sensitive medical information in order to throw the election. As you might imagine, that was an interesting fact pattern for our committee at the time. But over time, as we started to look into their activities, I became much more interested in the things that Cambridge was doing in the open: what they were doing with voter data in the United States and how they came to occupy that role in United States politics. For context, they worked in 44 races in the United States in 2014, and in 50 races in the United States in 2016.
Including on behalf of two of the major-party candidates for president. We could spend quite a bit of time talking about Cambridge Analytica, but I want to make three points relevant to regulating the use of data in the election space. Over the course of the investigation, the two questions I got asked most often were: what is psychographic targeting, and does it really work? Psychographic targeting is a term developed largely in the commercial sector for online advertising. The premise is that by using your individual personality traits, which are inferred from data gathered about you online, on social media like Facebook, and measuring things like your openness, extraversion, and neuroticism, messages targeted at you based on those traits are much more likely to predict your behavior and, as a consequence, more likely to alter that behavior. For commercial advertisers, the objective is to make you buy pants or shoes or whatever. It has a different connotation when applied in the electoral context. The three studies I put up here on the slide were referred to us by engineers at Cambridge, and I think they stand for three important principles, things that I wasn't aware of prior to the investigation. The first is that private traits are predictable from your digital footprint. Innocuous Facebook activity, liking Katy Perry or the Super Bowl or certain sneakers, reveals highly personal information about you, calculated by algorithms. Researchers found they could use Facebook likes to accurately predict sexual orientation, religious and political views, personality traits, intelligence, use of addictive substances, parental separation, age, and gender. So information you may never have revealed publicly is now available through the algorithms. The second piece is that computers do a better job of figuring those things out than humans do. With 300 Facebook likes, a computer could produce your personality assessment more accurately than your spouse could.
And the third piece. Yeah, right, I have no additional comment on whether that's a valid measure or not. The third piece, and I think this is where it hits home, is what the researchers call digital mass persuasion. They did a real-world study of 3.5 million people and found that using individuals' underlying psychological traits to target messages at them resulted in 40% more clicks and 50% more purchases through the campaign. I'll just read one passage that they put at the front of the report as to why it was important. They said digital mass persuasion could be used to covertly exploit weaknesses in people's character and persuade them to take action. My second point about Cambridge Analytica, and the quote is partially obscured by the picture, is that we know a couple of things about Cambridge Analytica and the Russian government. One, we know that the Cambridge Analytica CEO was briefing individuals associated with Russian intelligence on their U.S. voter-targeting activities. We know that Cambridge Analytica data, including the psychographic models, were accessed from Russia at the same time the Russians were engineering their specific outcome, and that they're doing this all over the world. The second piece, which I think is interesting though of a different flavor: a 2019 report from Oxford's Computational Propaganda Research Project found evidence of organized social media manipulation to shape domestic audience perception in 70 countries in 2019, up from 28 in 2017. My third point about Cambridge Analytica, and this was mentioned before: they weren't alone. Cambridge no longer exists; however, I put together a quick collection of entities, and I'm sure there are more. These are groups that either employ former Cambridge Analytica staff, contracted with Cambridge Analytica while they existed in the United States, or provide similar services to similar clients. One of the companies on the slide, I won't say which, reportedly is offering an app that was developed for a U.S.
politician to collect voter data in eastern Ukraine on behalf of a Russian-leaning Ukrainian presidential candidate. Bonus points if you know which one. Okay, so the policy response. The details that I have relayed to you created a pretty compelling reason for Senator Feinstein to introduce legislation, which she did: the Voter Privacy Act. Now, I haven't done PowerPoint presentations in a number of years. I was under the impression that was part of this presentation, so I put these together, and I apologize that the graphics aren't displaying as they did on my monitor. Anyway, the Voter Privacy Act takes the principles that you're familiar with from the GDPR and the California Consumer Privacy Act and overlays them on top of the Federal Election Campaign Act, so that if it passed, individual voters could instruct entities about how they can use their data. If you don't want the DNC to have your information to target ads at you, you can tell them to delete it. There's one right that you can't see on there, and credit to Professor Cohen here at Georgetown for her help: the right to prohibit targeting you. Google or Facebook, which have more data on you than any of us will ever be aware of, can still carry out ad targeting on their platforms even if you told a candidate not to keep your data, so this right would allow you to opt out of targeting through social media as well. As we started to vet the bill, having a background in FISA and national security law and not being a First Amendment expert, the first thing I was told is: you have a Sorrell problem. I'm sure you're familiar with it; it involved a Vermont data privacy statute. It implicated the First Amendment right of the speaker to have access to data as part of communicating with an audience, and the government interests in the case weren't sufficient to sustain the regulation.
The government said the statute was intended to help with privacy and physician confidentiality, but the court found that, given the imposition on the First Amendment right to access data, the government's interests weren't sufficient to sustain it. So the point is well taken that for any attempt to regulate the use of data, the first obstacle is the First Amendment right of speakers to have access to it as part of communicating with a broader audience. I think that's particularly acute given that this was in the commercial speech doctrine area. I will try to wrap up with a positive point. I actually think the silver lining of Cambridge Analytica is that there's now so much more information in the public record about how personal data is being used in the electoral context. Given the hurdles you might encounter with a challenge under Sorrell, I think the evidence about how voters are being manipulated or coerced, in addition to foreign nations using these techniques to engineer outcomes, makes a compelling case that regulation in this space could survive such a challenge, and may actually be a national security imperative. So I'm happy to answer your questions. I know I covered a lot in a very quick period of time, and there's a lot of nuance in the constitutional analysis that I'm happy to discuss, but I appreciate your time and am happy to be with you all.

Great, thanks so much. Georgetown, Alex, Julie, thank you so much. This is an interesting panel and day, and it's amazing that you brought everybody together to talk about these issues. I'd love to just comment on the first two panelists, but I'll give my remarks and talk to them later. I wanted to start by underscoring the urgency, then get wonky and go back to the 230 discussion, and talk about what we might do in a broader framework.
So I don't know if any of you noticed, but the Bulletin of the Atomic Scientists recently moved the Doomsday Clock up to 100 seconds to midnight for the first time ever, and the first item in their press release about that was disinformation, because, they said, by reducing trust and corrupting the information ecosystem, it has undermined democracy and our ability to deal with existential threats. No sooner did they send out the press release than Iowa proved them right. Among all the things we know about Iowa: I don't know if you noticed, but Judicial Watch had put out a false accusation about voter fraud before the end of voting on that day, and the Republican secretary of state of Iowa debunked it and asked them to take it down. We have no code about how platforms should handle that. Facebook was very responsible; they put a screen up over it so you couldn't share it unless you acknowledged that you knew it was false. But the purveyors played arbitrage with the fact that the platforms all connect, that we look at all of them, and it spread among all the other disinformation. Obviously this isn't just happening in the U.S. We hosted a panel yesterday and learned that many of the same third-party PR firm tactics are happening to a greater degree in Kenya, India, and Brazil. The Kenyan expert said she finished her first report about Cambridge Analytica right before the U.S. election, because she was so focused on what was happening there. So what happened? Like other people in the room, I have been working on these policy issues since the internet was called the information superhighway. I was in some of the rooms where it happened, where the initial framework was put together for the internet. If you go back to the psychology there: yes, it was an infant industry, and there was the idea that you should have regulatory breathing space for an infant industry. There was also the First Amendment; you really can't have the government regulating speech.
And then there was also the fact that we were in this really deregulatory moment, and the idea that the internet would give voice to the voiceless and power to the powerless. On the deregulatory piece, there was a sense that agencies are slow moving, they take a long time, and regulation would kill innovation. So the new model: it's not that it was completely laissez-faire. We now think of it that way because it's called self-regulation, but the idea at the beginning was a multistakeholder process, that companies and civil society and other stakeholders would get together and come up with rules, and they often did that with the threat of regulation behind them. They didn't have the threat of liability, as you do in many other industries, but they had the threat of regulation, and that worked on privacy. It worked on setting up the database on child exploitation at NCMEC. So there was this model, and what I would like to argue is that the multistakeholder model is broken; the platforms and the agencies have suffocated it. I don't know if you noticed, but a bunch of civil rights leaders wrote a letter to Mark Zuckerberg, completely frustrated and even angry. They'd been working on a civil rights audit with Facebook, and they felt like they weren't being dealt with in good faith and weren't making enough progress. So much for civil society stakeholders working with the platforms. Facebook had also agreed to give data to a bunch of researchers to hold it accountable, through something that was created called Social Science One. Press releases were put out the night before Mark Zuckerberg testified in the Senate saying they'd be transparent and accountable, and then a year later a letter was sent in complete frustration because the researchers hadn't gotten the data they'd been promised. And the agencies aren't stepping in when they have authority. So we have heard about users being manipulated and tricked online. Harassment is a feature, not a bug.
The news media is being completely undermined. So I would argue the multistakeholder model is completely broken, and we need to figure out some new model or framework. I won't go into the whole thing here; we'll write about it in our article, and we have a report coming out next week. But very simply, the first thing we need to do is update a bunch of offline laws that are questionably applicable in the online environment. What would that mean? Right now you're confronted with dark patterns, which means user interfaces that are meant to trick you. You're not given enough transparency, enough information, and you're tricked. What do I mean by that? It's really easy to share a meme. Today there's a spliced-up video of Nancy Pelosi tearing the State of the Union address; it's spliced to make it look like she's tearing it at times when she wasn't. How did I see it? A whole bunch of Democratic members of Congress are retweeting it, saying that it's false. It's so easy to amplify disinformation; they're going to make it trend. So the dark patterns take away the friction and encourage you to spread disinformation. They encourage you to give up your data: it's very easy to click to say yes, go ahead with your cookies, and it's really hard to say no, don't use cookies on me. We could go on and on. So let's update some consumer protection laws, requiring user interfaces that work, and transparency. Campaign finance laws obviously need to be updated, and I would argue the Honest Ads Act is a no-brainer; it's appalling it hasn't gotten a hearing, it's a bipartisan bill. But let's go beyond that. Let's do some know-your-customer-type requirements. If we go to all the trouble of passing a law, it would be great if we had common standards, but we also need an API that's searchable so that civil society groups can figure out what's going on.
But it doesn't do any good if the name of the group listed is Secure America Now or Let's Have a Better Tomorrow and you don't know who's behind them. You need to be able to pierce that veil, to see who is behind the 501(c)(4) that is doing the advertising, and there's no reason the platforms shouldn't be doing that. And civil rights laws need to be updated; civil rights lawyers have proposed updating the common law, for example, for the online environment, which would be really interesting. Then we think that there should be, and a number of people are talking about this, a PBS of the internet. This mirrors very much what the commissioner was talking about. We need to tax online advertising revenue, and we need to create a fund, like New Jersey just funded, for public interest local journalism. The revenue for journalism is being siphoned into the online environment, at the same time as these dark patterns support content mills, or Potemkin outlets, as an Atlantic writer called them. I call them instrumental journalism, which is much less catchy than Potemkin. They're there for a financial or a political end, not to be independent journalism, but they look exactly like independent journalism that follows standards, with a masthead and a byline and corrections and so on. So online advertising should be taxed, public interest journalism ought to be funded, and then we need a white space, like PBS has spectrum on the dial; it needs to look different. Then we need to reinvigorate the multistakeholder model. The agency model is slow, it will take a really long time, and technology changes all the time. So we would suggest dialing back, or really clarifying, Section 230 in a few ways, so that there would be a bit of a stick hanging over the heads of the industry, and you could condition it on companies signing up to a code of conduct, to have some consistent standards across platforms on things like fact-checking, moderation, and some of these ad rules.
We'd also suggest taking a look at the Public Knowledge idea that maybe 230 shouldn't apply to ads. Of course you wouldn't be able to sue anybody, given the First Amendment, over political ads, but bringing greater scrutiny might produce more fact-checking, might produce a notice-and-takedown regime. Some groups are asking whether 230 should limit liability under civil rights laws, so that might be something you could exempt from 230 as well. But the basic idea is to update the offline laws for the online environment; to fund local journalism, what Ellen Goodman, whose work I have been drawing on here, calls the signal being drowned out by the noise; and to reinvigorate the multistakeholder model. We proposed a Digital Democracy board, but it could be the FTC or the FCC, as long as they're given the capacity to do this kind of work.

Thanks, Julie and Alex. Thanks especially for having me be the last speaker on the last panel, right before the cocktails. I wanted to talk about federal legislation to address election day crises. There have long been threats on election days from natural disasters, including hurricanes and earthquakes, and newer threats like terrorist attacks; 9/11 occurred on a primary day in New York. Now there's a new class of threats from social media disinformation campaigns. Threats on election day could take several forms. One would be attacks that actually change votes, as the last panel discussed, but there are all kinds of other threats that could occur on or around election day, including hacks of the voter registration rolls that could falsely indicate that voters had already voted. Some people say that if you have paper backups for the poll books, that's enough, but I don't know how that handles the situation where somebody has made it appear that you have already voted. As others have talked about too, there could be attacks on the electrical grid, especially in jurisdictions where the votes are expected to be close.
There could be all kinds of social media disinformation about where and how to vote. There were posts saying you could vote by text, and they gave you a number to send your vote in to; that would lead people to think they had voted when they hadn't. There could be fake news about terrorist attacks that hadn't occurred. And it's interesting that in 2014, before anybody even knew what the Internet Research Agency was, and this is outlined in a 2015 New York Times Magazine article, the Internet Research Agency staged a fake news campaign indicating there had been a chemical explosion in St. Mary Parish, Louisiana. They did things like creating fake CNN webpages and videos, and it led to a fair amount of panic there. At the time, everybody asked why the Russians would want to do this in this little town in Louisiana, and the answer, I think, is that they were practicing. Also, some people say these threats aren't that real. But of course the federal government, together with local and state governments, has for the last few years been engaged in all kinds of contingency planning to deal with these threats and others, and people who have some knowledge are very concerned. In 2016 it was Russia, but now of course there are many other nation-states that could be involved, including China, North Korea, and Iran. I think people are particularly worried about Iran after the U.S. assassination of the Iranian general. Part of it, too, in terms of not looking just at actual hacks of votes, is to get back to what the Russians in particular are trying to do. As many of the intelligence agencies and the Senate Intelligence Committee have written, this is a long-standing part of the Russian active measures efforts that really go back to the beginning of the Soviet Union, when attempts were made to destabilize Western liberal democracy. It's really the internet that has made that much more feasible to pull off.
And they're trying to fan the flames of divisiveness all over Western democracies. That includes not only the 2016 elections but also Brexit, the recent French elections, the Catalan separatist movement, and many other places around the world. And although Congress has always had broad authority to regulate a lot about when elections are held and how they work, there aren't any federal statutes that really apply to dealing with election day crises and postponing or redoing elections. On the other hand, you would have thought that since there have long been other threats like hurricanes and natural disasters, and more recently terrorist threats, the states would have a well-thought-out body of law on how to deal with those things. But actually, a lot of states don't themselves have any statutory law, and a lot of what's out there is very inconsistent. That might have made sense when you were dealing mostly with things like hurricanes, which occur in limited geographic areas, where local conditions might mean you need different kinds of responses. But I think the new kinds of social media disinformation threats, and the other kinds that I went over, are going to require a high level of technical sophistication to address. They're probably going to occur in more than one place at once. If there are inconsistent approaches in different states, including none at all, I think that would feed into the very purpose of the state actors behind them: sowing doubt about the legitimacy of outcomes. Regardless of whether there are statutes that address these issues, lawsuits will still get filed challenging outcomes based on due process and other claims.
If there aren't well-thought-out statutory schemes to deal with them, I think those cases will take a long time to resolve, and the results will probably be inconsistent and unsatisfactory. Going back to Bush v. Gore, I think there's good evidence of that, where the Supreme Court at least purported to stop the Florida recount based on the lack of clear statutory guidance about what the standards for a recount should be, and looked at the deadlines under federal law, in particular the safe harbor date for states to certify their electors so that Congress couldn't challenge them. So to me, with all of that in mind, it's clear that there's a need now for federal legislation to address these kinds of election day crises. Doing that would also help with having more consistent responses to these other threats. I think the issues are very complex, so I won't try to spell out what I think all the answers are, only that I think it really merits discussion. And the problem in a lot of these scenarios wouldn't be that the votes were close. Probably there would be enough votes that had been cast, but should states certify results if it's clear that these kinds of attacks have led to many people not being able to vote? To go ahead and certify electors or victors in those elections would, again, help to destabilize the legitimacy of those elections and of democracy. So with those thoughts in mind, just a few thoughts on the shape the legislation could take. I mentioned that the 9/11 attacks occurred on a primary election day in New York, and New York at the time didn't have a clear statute. But like a lot of states, it had emergency powers that the governor could invoke, and he ordered a postponement and a redo of the elections. It came under a lot of criticism because in ordering that the elections be redone statewide, he threw out all of the votes that had already been cast, even though a lot of people had voted.
So after 9/11, New York came out with a statute, New York Election Law section 3-108, that I think provides an interesting framework for addressing all these kinds of threats. It allows the state to provide for an additional day of voting in affected areas only, so it wouldn't necessarily be statewide, and they don't throw out the ballots that were already cast. It's triggered by a showing that, as a direct consequence of a natural or human-made disaster, fewer than 25 percent of registered voters actually voted. I don't know if 25 percent is the right figure, but I think that law would provide a good framework for dealing with some of the newer cyber threats. As I said, a lot of states have laws that put the authority in the hands of governors to order a redo. Of course, in these contentious, politically divisive times, I think the hardest issue is who gets to decide whether there should be a redo or a postponement. One way to address that might be to require a double trigger: the governor at the state level but somebody else at the federal level. For all kinds of reasons, maybe it shouldn't be the president who gets to make that call, especially since one of the crises people have looked at is what happens if the president doesn't leave if he loses. You wouldn't want to make it easy for the highest elected official in the land to say, no, there's been some serious fraud and we need to do a redo. So maybe the Gang of Eight in Congress could be a place; I'm not sure. And finally, I did have one slide, which is just to show the timeline for the presidential election this year. Most of those dates are set by Congress, except for inauguration day. I think Iowa is a good example of how something goes wrong and people are left scrambling to figure out how to address it. If things like that happened on a much broader scale on a presidential election day, again, in Bush v. Gore they said, well, we have this safe harbor date coming; we have to just stop the recount.
I don't think that's the best way to handle serious questions about the legitimacy of a vote. So there's a lot of room to build in more time, and you could even move back election day to make it easier. It doesn't have to be the first Tuesday after the first Monday in November. That might raise its own issues, but even with some of the other dates, I think there's room to build in more time, which is probably something that's useful to do. And I'm happy to take questions later. Thank you all for a set of thought-provoking talks. I want to follow up on what you mentioned, Commissioner Weintraub and Patrick, about limiting voter data collection or use, and how we intervene at this point on microtargeting. In the Voter Privacy Act, the approach that Senator Feinstein and the other cosponsors used is to focus on the regulated entities, thinking of a disclosure regime but then also a use restriction requiring consent. Commissioner Weintraub, you're pulling back one level bigger by thinking about payment structures for this information. It would be really interesting to tease out the different approaches a little bit, and I think we'll start first with you, Commissioner Weintraub, because you got cut off in your explanation of that theory, so I want to give you more time. I want to be clear, because I have a theory on microtargeting, which is a proposal that I'm encouraging the platforms to adopt on their own. Which one do you want? Go for both. All right, so we're surfacing ideas. In all of this we have to be really careful, I have to be really careful, in terms of what the government can do without intruding on people's First Amendment rights to express themselves. So on the one hand, I have proposed that the platforms, when they're looking at political advertising, should limit the microtargeting in order to allow for more counterspeech.
That's the classic First Amendment response: if you don't like what one person is saying, you can raise your own argument in opposition. The platforms have come up with very different approaches to this. Twitter said, well, we're just getting out of the political advertising game altogether, which struck me as a little bit draconian and perhaps cutting off a productive venue for people to engage in political speech. In fact, they have found that it's not so easy to define what is political advertising, and they have gotten some complaints about that. Google, I think, has come up with the closest approach to the one that I recommended, and therefore I think it's the best approach of the three major platforms, in that they said they're not going to microtarget below the zip code level. They'll do it by age or gender, but they won't go below the zip code level. That's based on experience with mailers that go out to whole zip codes: if you blanket a zip code, you'll find somebody who does not agree with the perspective in the political information they're getting, and they will be the ones who are motivated to engage in counterspeech, or sometimes to complain to the FEC. But Facebook has taken a hands-off approach on this and said, well, we studied it all very carefully and decided that we're going to do very little. So that was disappointing. I'm hoping that with some public pressure they might change their mind, but I haven't seen much indication that they're inclined to do so. In the symposium article (and I really did give it short shrift because I was trying to cover too much), one of the problems with looking at section 230 of the Communications Act is that even if one were to cut it back in some ways and open the platforms up to some liability, the question is, who would have standing to sue under those circumstances?
Because if what you're talking about is a harm to democracy and everybody is suffering from the same harm, who has the individualized harm to sue? By analogy to the taxpayer standing doctrine, maybe nobody. That's a problem, which is why we decided to look at it in terms of just adjusting the costs involved. So instead of them getting this freebie of all of your personal data, which most people do not consciously agree to give up: you get these long terms of service and they ask you to click on them, and who ever reads that stuff, right? They are in effect contracts of adhesion. I know there are various movements out there to delete Facebook or other platforms, but most people feel they need to be online in order to engage with their communities. So if the price of being online is that you have to allow them to suck all your data out, I don't know that people feel they have a meaningful choice about that, or that they're even conscious of giving it up, given the way the apps interact with each other. I mean, it's really kind of terrifying the amount of information my cell phone knows about me. And this is very different. My television doesn't know this about me. My mail doesn't know this about me. My landline doesn't know this about me. But my cell phone, oh my gosh, that knows so much about me. Some people in the audience are going to debate you on that, but you can continue. Okay. But the internet of things, you mean? That's scary too. Yeah. And, you know, we bought a lower-grade washing machine because I really didn't want my washing machine to be connected to the internet. It doesn't have to text me when the laundry is done. But sorry, I'm starting to sound like a Luddite. The point is that making them pay for this is, I think, also sort of a consumer protection approach, because I believe this data belongs to each of us.
And if they want it, they should have to pay for it. That would really adjust the economics of how they manage their advertising and perhaps introduce some more friction into the system. So this is interesting. You're really pulling back to a different, macro level, and not sounding like an FEC commissioner right now. No, big picture, big picture. When they get wind of this idea, they're going to wish I was merely suggesting amending section 230. You heard it here first. Patrick, I'd love for you to engage on this and talk a little bit more about the theory of the case on microtargeting and what it means to limit the use of these technologies. Yeah, I think it's fascinating what the commissioner is proposing. As for what we set out to do, I will read you my favorite Cambridge Analytica quote. Cambridge described their "psychological operations," and that's in quotes, as changing people's minds not through persuasion, but through "information dominance," a set of techniques that includes rumor, disinformation, and fake news. Obviously, on its face that kind of behavior is fairly pernicious, but the question for us then became: okay, how do we mitigate the effectiveness of these types of techniques without being able to address the content? As I pointed out, the studies make very clear that the more data they have on each individual person, the more effective the messaging becomes. We tried to cut the link between the data that's available to political entities and the psychological targeting efforts they can carry out without an individual voter's knowledge, and to allow a return to the kind of political advertising we're probably all familiar with from when we were younger, something closer to broadcasting in some fashion. So it's really addressing microtargeting through a different tactic.
I use a term sometimes that nobody seems to like except me: these are the smart weapons of the information era, and we're turning them back into dumb weapons by cutting the targeting capabilities. You cut off the data, and the messaging becomes less effective when you don't know how it will be perceived by each individual person. It's really getting at the same issue, I think, in terms of the pernicious effects of microtargeting, et cetera. But we found, in terms of the legal environment and of course the First Amendment issues, that the data was really the thing we had the best chance to go after. Just to add one more thought. Part of what we're trying to do here in going after the sucking up of the data, rather than the messaging, is that we're looking at conduct, not content. That's an important distinction for First Amendment purposes. I would add to Patrick's point, and we have talked about this, that in Europe there's discussion right now about whether collecting political and philosophical data about individuals (and by collecting they mean inferring as well) is compliant with the GDPR, the underlying EU privacy law, because it's considered sensitive data. So there was a halt in the last election; the UK Information Commissioner's Office has talked about this. And what you have to do with that kind of sensitive data is opt in, not opt out. So you can imagine, if a company had to ask, "Is it okay if I infer your political beliefs based on your magazine subscriptions, because I would like to microtarget you with nonsense?" you might not say yes. And I wonder, I think it's an interesting question, whether limiting this on the collection side is more First Amendment friendly than some other ideas. I don't think the commissioner's ideas are less friendly, but some other ideas I have heard.
The other thing I wanted to do with the bill, given the uncertain First Amendment environment: if there were a challenge, there would be a balancing of constitutional interests, and we thought it was advantageous to have not the government precluding the use of a class of data, but instead the individual, as the popular sovereign, vis-a-vis the candidate who is seeking that individual's vote for office. Personally, I think it would be an odd precedent for a court to find that a candidate seeking a vote has more of a right to your personal information than you do. So we wanted to rely on that paradigm because we thought it was an advantageous comparison of rights. Another paradigm that you really saw in the introduction of the bill is a clear distinction between manipulation and persuasion. Thinking back to the panel earlier in the day and the question of what you would do if you could wave a magic wand: it was getting better clarity around that distinction. I would love for you to engage on how we get better at that distinction and at clarity around that data. You discussed two theories, so I'll defer to you. The one thing I want to throw out there is that I don't want us to focus only on the ads. I think they're pernicious and really important, but as we have learned, there's a whole world of third parties out there, and instead of paying the platform to run ads, you can also pay a third party to gin up bots or trolls or influencers to produce what's called "organic," that is, unpaid, messages, and what they can do is just flood the information space. This is a trick that authoritarian governments use all the time, so you're not really sure whether other people agree with you. It's not that they're trying to convince you; they're trying to confuse you and, again, drown out the signal with a lot of noise. And I think we need to pay attention to that too.
One of the things that's happening, as the platforms develop policies and as we all become more aware, is a shift in tactics. I think that organic pushing-out of disinformation is one we have to become really aware of, not only as it microtargets us, but as it corrupts the information ecosystem. A great point. Go ahead. I would go after the use of bots in general. Yes, the court has said that corporations have First Amendment rights, but I dare them to say that robots have First Amendment rights. I was talking to Nick in the break about whether the companies have the technology to detect the bots and take them all down. And I would really love to see legislation around that, going after the use of bots. Patrick is eagerly jumping on that. I'm trying not to do it eagerly, but in a restrained manner. When I was with Senator Feinstein, she introduced the Bot Disclosure and Accountability Act. The social media platforms would have to require users to disclose the presence of a bot. If you operate a bot on your Twitter feed that responds automatically every time The New York Times puts out a story, saying that it's fake news, you have to disclose that it's a bot. And there's some helpful data suggesting that once humans are aware they're conversing with a bot, they find the information much less persuasive. So I think disclosure in this field is hopeful. Then you deal with the same First Amendment issues: you're talking about how the message is delivered, through computational, artificial means, as opposed to what the bot is saying. I completely agree. Back to the national security field, there's a lot of great reporting on the use of bots around important events in the 2016 election. If you look at the night after the first debate, something like 25 percent of all the conversation online about the debate was done by bots, so it's a significant issue.
I think there's a real policy rationale for a disclosure regime. That may not go as far as some folks want, but it's one way it could be done. I agree. That would be helpful, and disclosure is something the Supreme Court is pretty solidly in favor of, so that would work. I would echo that, and then add on top of it that we need a disclosure regime for things like deepfakes. If somebody feels they have a First Amendment right to alter a video, it ought to carry a disclaimer that this is manipulated video. Go ahead. Back to your question about manipulation versus persuasion: I had looked at some approaches to section 230. I don't think disclosure is the most effective; I don't think people care who is behind something. But I look at a right of reply, coming out of a cable and broadcast background. There have long been things like the fairness doctrine and equal time. The courts have expressed some skepticism on First Amendment grounds, but I think under the First Amendment there is some room to develop something like that, especially in terms of section 230, which is such a broad grant of immunity. If you could condition the immunity it provides on some kind of right of reply for people who are affected, maybe that would be more effective than a disclosure obligation. The only thing I'd add about manipulation is that I maybe disagree with you a little bit. Obviously, the tactic of the bad actors is to hide their identity and their manipulation, to launder it; otherwise they wouldn't bother hiding behind a 501(c)(4). They wouldn't bother pretending to be a news outlet. They wouldn't bother making it look like realistic video when it's a fake.
To me, right now it's anti-disclosure: you take the totally hyperpartisan news outlet that has no masthead and no fact-checking, where every piece is an opinion piece, and you make it look exactly the same as a traditional (I hate to use the word traditional) outlet that follows independent journalistic standards. The content mill is arbitraging the trust, stealing the trust that has been built up in the supply chain of news that comes out of independent journalism. If you can undo that pattern, so that it's understood to be a different kind of communication, whether it's a bot, an outlet, a deepfake, and so on, I think that has to be part of undoing the manipulation, and it has to help at the margin, at least. Right. What it needs to be is disclosure with meaning, which is your commentary even on the act. It's wonderful to have the disclosure, but it needs to be accessible; there needs to be disclosure that people see and use. Not to hog the mic, but I think there's real-time disclosure and then after-action reports, and they serve different purposes. The real-time disclosure can't be "click here to see more." It has to be intuitive. It has to happen before you're tempted to click and pass it on. It has to be something the individual can make use of. After-action is like the black box: when an airplane goes down, we can figure out what happened; it's handed over to the FAA so they can tweak rules, or somebody can sue the airline after finding out what happened. We need actual after-action reports that are searchable, that civil society groups can make use of. The ad database is something like that. Or a moderation log, or, you know, a record of what was taken down, so that you can appeal. These kinds of after-action things are really important, not to slow the spread of disinformation, but more to figure out penalties and help society understand who is trying to do what.
Let me ask one more question and then turn it over to the audience, so get thinking of your own questions. As a final point, I want to pull it back to reinvigorating the multistakeholder process. You run into the challenge of how unified you want the platforms to be in their responses. There are areas (and it came up in the presentation earlier today) where there's strong consensus about what misinformation is, anti-vax information for example; there are other areas where it's much more contested. Even in thinking through best practices for the platforms, reasonable minds could disagree, and there could be room for experimentation as platforms take more aggressive approaches toward taking down content. If an agency were putting pressure on the platforms to have more thoughtful, laid-out approaches, how do you reconcile those tensions? I would pick up on the earlier remark that you should focus as much as possible on practices and not content. So if you have fact-checking rules, they should apply to everyone. There should be certified fact checkers, things like that. Who do you rely on for the definition of white supremacist groups? Which kinds of disclosures are you going to have? I think you focus on practices as much as you can, and on consistency of rules and who they apply to. So, should you exempt candidates, or shouldn't you? Potentially the floor versus the ceiling: do you see room for flexibility? Yeah, I think you want to have practices; that's the point of having the multistakeholder model. Instead of an agency figuring out how it should work seven years after the technology has changed, you have the actual players in the room. It's a little bit more dynamic; they can make it work for the technology of the moment and then adjust it more dynamically. And there's art in that more than science.
But that's the idea: the agency would set some kind of floor, the players would come up with some practical way to implement it, and then they would be audited against the practices, not the content. I guess I get a little nervous about uniformity, especially given how quickly technology changes. If you don't allow anyone to experiment, you get stuck with past solutions, which wouldn't really be ideal. Even though I think Facebook should take more responsibility, I am sympathetic to their point that they really shouldn't be the thought police. I don't think they're very good at it. And that's why I was focusing more on the idea of having more speech, through a right of reply, rather than restricting speech. If I could just make one comment on the last point you made (I know we're on time here): I wish the conversation between Facebook and Twitter about political advertising wasn't about whether or not there should be political speech on a platform. Again, to return to the data, as I have done throughout today's conversation, I wish it was: can we use the psychological, in-depth profiles we've developed on each individual person to target them? I would be fine with a policy that used zip code or any other measure that applies to a broad class of people. It's not so much about whether a candidate should be able to put their message out on Twitter, or whether someone has to be the fact checker or the thought police. It's: can we use these highly sophisticated targeting techniques that social media platforms have developed to send that to people throughout the country? I wish there was more conversation about the data and less about whether we should control what's in speech or be checking whether it's true or not. I think that's left out. I think there's a reason for that, though. It's not left out of my proposal. But the reason is it cuts into their profits, right?
The platforms have a real financial incentive to say, we can't do any of this because we would be the speech police, we would be restricting content, we would be violating the First Amendment if anybody tried to legislate it. Although at the same time, they have sort of asked for a little bit of regulation, which I think might be a little disingenuous, because they know it's not going to happen and it takes some of the heat off: well, it's not our fault, we don't want to do this, we're waiting for guidance from the government, which for a variety of reasons is unlikely to come. But yes, I think we can talk about conduct rather than content and not make anybody be the speech police. It is going to cut into their profits, though, and that's why they like to frame it that way. All right, we're going to pass it to audience questions. Please, take it away. Thanks very much. I'll be quick. I was wondering if I could ask the panel to elaborate on the disclosure of the black box that Karen referred to. Part of my perspective is as a researcher. When I go into a lot of tech companies and show them an algorithmic result (I've done this with Google and Facebook), the answer I often get is that they don't have answers even for how the system works. Even with full access to the data, they'll say these are dynamic systems, there's a lot of testing going on, there are a lot of systems that cannot be dialed back. The black box is not really possible to reconstruct and replay after the fact. So one response I get is: I can't do that for you; I can't give you an explanation. That's the bucket that says explainability is actually really hard. The other bucket of response is a trade secret response that says: I can't keep talking to you about this, because you're getting close to things I really don't want to talk about and can't talk about for various reasons.
So, as a researcher, I encounter these two moments, and I'm wondering if there's a regulatory response. One thought is: if you can't explain something, maybe it shouldn't be playing a public role. That's the "don't build it to begin with" answer, and maybe that's true with voting as well. The other answer is to ask whether there's a sort of public exemption (I'm not a lawyer, so I'm probably using the wrong words) to trade secret protection when the technology is serving a public function. So you can build it, sure. But if it misbehaves, you have to tell us about it, because it had a public function. So, that was a statement really, not a question, I apologize. But I'm just wondering if you can walk me through the regulatory thinking: is that crazy, or is there a third option? That's great. I love that question. I don't really have an answer for it. But as far as how the algorithm works, I draw regulatory analogies. In the FDA context, it has to be a safe product. The FDA is a very flawed agency, but theoretically the product has to be safe, and it has to do what it says it's going to do. So there are, again, conduct-type measures. What I think about is that after the 2016 election, the only reason we found out as much as we did is that the Senate Intelligence Committee demanded data from the platforms, and then we found out the extent to which African Americans were targeted. We would not have known that otherwise. How many rallies were organized by Russians. We would not have known that otherwise. Fake groups. So there has to be that kind of transparency. Our elections are happening on private platforms, and we need to have that kind of visibility. So I think that's the kind of dynamic, some type of conversation about the algorithm, that you would have as part of forming a multistakeholder code of conduct.
The IP question is an interesting one, and I don't have an answer for it, but I'm sure somebody in the room has thought about it. I'm going to move us on to Julie's question. As you answer, you may also want to respond to the previous one. Julie. This is almost a different way of asking the same question that Mike just asked. But I was trying to sort your comments into buckets based on your point of intervention, right? So, there's one bucket for post-crisis response; my question is not really about that. There is one bucket for transparency, which might become post-crisis response. There's one bucket for targeting, right, which is how particular voters are singled out to receive messaging. And there was one bucket for taxing and funding. There's another bucket that none of you really spoke to, except that Mike just brought it up, which has to do with the algorithm. And when it has come up, you used an interesting word, which is "organic," right? If you take on board the ramifications of that word, it's like a tree, right? It's natural. And yet it ties into a lot of what the first panel talked about as the performative dimension of information consumption, people circulating things to their networks. But to use the word organic about that process kind of begs the question of how we've all been conditioned to hit like and ha-ha and hate, to just reflexively press buttons. I watch my teenager; he can press it like a hundred times a minute. And there's a fallacy embedded in characterizing that as organic, right? I suspect you ran into the trade secrecy problem, right? But the companies that are leveraging the data-harvesting and revenue-generating potential of organic spread know rather a lot about the patterns of organic spread, and presumably they could interrupt it, right?
Just like you could chunk voters into zip codes, you could interrupt viral network spread, because if you're constantly sitting there tweaking your algorithm to spread that stuff faster, presumably you could do it the opposite way, right? And yet I never hear anybody talking about that, right? I am just waiting for the content moderation debate, which is all the whack-a-mole stuff that's up there, to get to this other thing. I was wondering if I could prompt you to talk about that as a site of regulatory intervention, which might be, you know, a way of talking about the competencies the government would need to be able to do that, because I think there's an arguable gap there. But I would like to hear you speak to it if you would.

Sure. Okay. So, I feel like we should have dinner and talk it all through again. But one of the things I was trying to talk about was the lack of friction: when I talk about the dark patterns and the light patterns, how manipulative it is in real time and how easy it is to share. So, that's thing one. Thing two is I think there's more to that information pollution than either ads or the algorithm. I would use "organic" with quotes. Organic means not paid. Just like in political campaigns before the internet, we talked about paid media and earned media, which is when you have an astroturf event and it gets on the news. There are these networks, bot networks, or influencers that you can pay, or real influencers, and there are ways to manipulate even real people who are influencers: they have huge followings and act like army generals, sending something out so everybody follows suit. I think there's more than just the algorithm going on. And I have so many things that I was trying to talk about; I didn't get into all the things we would think of in the code of conduct, but I'll just go through some of them.
One would be something like a consistent political ad definition and rules, including for candidates and political figures, using fact checkers, and committing to a time frame for review. The way it works now is a journalist or someone else flags it and asks whether it's true or not, then it goes to the fact checker. They don't fact-check it if it's opinion. They don't fact-check it if it claims to be satire. There are very few fact checkers in the world; they get back to the platform, and the platform decides what to do with it. So, doing something about that, with a consistent standard for what's fact-checked and what's penalized. You don't tell the platforms what penalties they should have, but it should be clarified. That's thing one. Thing two is best practices. This is a Senator Warner idea, which I love. The platforms are conducting research on users by tracking or testing; you're in a Skinner box. So: using best practices for conducting research on humans; deference to civil rights groups on the definition of hate groups; promoting voter information; redirecting the way they do for ISIS and other things. And this gets at what you were talking about: deprioritizing engagement in the design of algorithms and providing users options for tailoring their recommendation algorithms. Harold talked about ideas for altering the algorithms so they're not driven entirely by engagement. There are ways to think about that; he's got interesting ideas. An appeals process, and whitelisting of news and other scientific information. So, those were some of the ideas we were talking about in terms of the code of conduct, some of it moderation and some of it a little broader. Obviously this is just a paper tiger for people to throw things at. The algorithm one is probably the trickiest.
I'll just throw in that Roger McNamee, who was an early mentor to Mark Zuckerberg although he now has concerns about what Facebook has become, thinks we should switch over to a subscription model so that the platforms would not have the same incentives to promote virality and outrage just to keep you on the site long enough to see more ads. I think that would be difficult to do at this point, to unwind a billion-dollar business model. And I think there would undoubtedly be First Amendment challenges if there were legislation to address it, but that's another model for it entirely. We'll take another question.

Thank you for the presentations. This was a really great panel. One very quick comment and a quick question as well. For Ellen and Patrick: on the topic of bots, I think transparency or disclosure is something I've been advocating for years, precisely with the idea that if people see something is promoted by 13,000 bots and 5 humans, they're probably not as interested in what's in the tweet. And I think this is within the technical abilities of the companies, although I'm sure you'll hear it's not possible. On that note, Ryan Calo, an academic in Seattle, has done a lot of great writing on public regulation of bots and also on the manipulation and persuasion question. I would refer you to his work if you're interested. My question is for Patrick, on the Cambridge Analytica question. These papers that you recommended: do you think of this as corporate research, something you tell people to look at, or did you think it was real, robust academic research that was proven? To my mind the biggest question remaining is whether or not the stuff works and how it influences behavior. I would love your thoughts on that.

That's absolutely the right question.
I was speaking on the same issue on a panel last month, and my counterpart was arguing the point that you just raised, that this is a lot of smoke and mirrors and there's nothing to see here. One interesting point comes to mind: there is a similar paper in the PNAS journal that contests the findings of the mass persuasion paper directly. And one of the footnotes to that paper, the inverse of your example, is that the individuals who wrote the challenge paper actually received funding from the social media companies. I've also found, and this may be my national security tinfoil hat talking, that every time the Russians say it doesn't work, it makes me think it works. Every time someone who profited from it, once they've been caught, says it doesn't work, it makes me think it works. I'm not a data scientist, so I can't judge the findings. In terms of whether or not it warrants a policy response, I think the fact that it's published in a reputable journal is reason for concern. And I think the cost of not regulating in this space far exceeds the cost of regulating even if it doesn't work. So, to me, there's no reasonable objection to moving forward. But yes, I did think it was funny that with the one paper I could find that directly challenged it, the authors omitted in the first instance that they received funding from Facebook and Google and were forced to clarify that they did in fact work at an institute that received that funding. So, we can all draw our own conclusions. [inaudible question] That's right. And they omitted that in the first instance and had to clarify.

Final question. So, the topic of people getting value for their private data was discussed, especially by the Honorable Ellen Weintraub. And as I said in my first question, I'm a privacy person. When I was getting my degree, one of my professors was someone who studied a lot of the behavioral economics of privacy, trying to quantify how much people value their privacy and their private information.
And it really seems in practice that, while it is extremely sensitive to how you phrase things and how you present things, people don't actually put much monetary value on their private information, especially when they are not getting constant broadcasts of messages saying, oh no, your privacy is being violated. So, if people are willing to give up their private information, what is the appropriate thing that can be done there? A small question to finish this up.

Sure, sorry. No, it's a wonderful question. It's just a hard one. In my proposal, we make it non-waivable. You can't just give it away. And I've seen studies on this that people overrate the value of it
