Challenges in elections: oversight and regulation. For those who didn't see me at the top of the day, I'm Alex Gibbons, I run law and policy at the law school, and I'm happy to be hosting you today. We've covered many key areas of challenges; somebody was just saying that they're feeling a little depressed, so we need to try to get the energy up on this final panel of the day. We've heard about issues ranging from the fragmentation of political discourse and new pathways for misinformation to voter suppression and election security. This panel is going to add another challenge, the significant challenge that we face in election oversight and preparedness, but also talk about the solutions space, which is quite simply: what do we do? We have another phenomenal lineup to help answer these questions today. Sitting immediately to my left is the Honorable Ellen Weintraub, the chair of the Federal Election Commission. Right, indeed; we'll talk about that a little over the course of the panel, too. She has served as a commissioner on the FEC since 2002, and previously worked in the political law group of Perkins Coie and on the House Ethics Committee. Next is Patrick Day of Cloudflare, a web performance and security company. He previously served as a longtime staffer in the Senate, where we were colleagues, working as national security counsel for Senator Dianne Feinstein. Next, the Honorable former ambassador to the Organization for Economic Cooperation and Development, who is now at the German Marshall Fund, where she is the founding director of its digital and democracy initiative. Finally, at the end, Mark Lawrence Appelbaum, a Georgetown graduate who recently completed work on foreign election interference and online threats in U.S. elections. We'll have the same procedure as in previous panels: Commissioner Weintraub will begin and then we'll go down the order. Are we on?
Thank you, Alex, and thank you to Georgetown for inviting me. Being asked to talk about regulation in ten minutes or less is a bit of a challenge. What I'm going to do is hit on a few of the challenges I'm personally confronting at the FEC and then move into a discussion of the article that I've submitted to the symposium issue along with my coauthor Tom Moore, a proud Georgetown Law grad. Thank you, Tom, for helping me get all of that together, because I can assure you that without those efforts Georgetown would not be in possession of a draft today. Okay. So the number one challenge for me in election regulation is that we do not have a quorum to make decisions at the FEC. We are supposed to have six commissioners; we are down to three. We lost one, then one two years ago and one five months ago, and why none of those positions have been filled you would have to ask the president and the Senate, because they're in charge of that. But that is huge, and it means that we cannot launch any investigations, we can't conclude any investigations, we can't do any rulemakings, and we can't issue any advisory opinions. So that's a bit of a problem. Although honestly, the second challenge that I confront, and have been confronting for quite some time, is that even when we had a quorum, it was extremely difficult to get anything done, because the commission has been, for some years now, extremely ideologically divided. When you're an equally divided commission, and the Republican and Democratic sides have different views on whether any regulation of money in politics is even advisable, it is hard to move anything forward.
One example of that is a rulemaking that has been ongoing, believe it or not, since 2011, just to clarify the rules for disclaimers on internet political advertising. We were pretty much at an impasse, and I wasn't getting much engagement from the other side, frankly, for some period of time before we lost the quorum. The commission last did a comprehensive look at the internet and politics on the internet in 2006 and 2007, which has got to be about a century ago in internet years, and there are large areas that are unregulated that really need another look. We saw recently a case where a super PAC and the Hillary Clinton campaign were alleged to have coordinated through a whole bunch of communications over the internet, and their argument was, well, there's an exception for communications over the internet, except for paid advertising on another person's website; that wasn't this, and therefore we could do all sorts of stuff as long as the end result was on the internet. The commission's counsel and two commissioners disagreed with that, but interestingly enough, although these were Democratic respondents, it was the Democratic commissioners who wanted to proceed and the Republican commissioner who blocked the investigation. So that was a problem. We've seen the internet used as a way of sending both very open messages, candidates posting b-roll video on their websites in order to have super PACs pick it up and use it in commercials, even though they're not supposed to be coordinating, as well as more subterfuge using the internet. We have coded messages being tweeted out and a debate over whether that constituted public information, if only the people who knew where to look could actually decipher the messages. So we've had what I've described as a digital needle in a virtual haystack. We've had a number of challenges at the FEC, as I said, even before we lost the quorum, and Congress is similarly having problems getting anything done, also due to polarization. It's very frustrating to me that the Honest Ads Act hasn't passed, which would bring internet political ads under the same framework as broadcast ads, and I would also desperately love to see Congress pass the DETER Act or something like that, bipartisan proposals to address foreign interference in our elections by imposing really strong sanctions on anyone who would try it. I don't know why we can't get common-sense rules like that passed. Why is all of this important, what goes on on the internet in politics? One third of young Americans rate social media as the most helpful source of information about the 2016 presidential election; this is after 2016, according to a Pew poll. Digital political advertising increased 260 percent between 2014 and 2018 and is projected to reach 2.8 billion dollars in 2020. So this is not a small venue. And there are, as I've said, large areas that are completely unregulated right now. For the symposium, we decided to look not at FECA, the Federal Election Campaign Act, but at Section 230 of the Communications Act, what's been described as the twenty-six words that created the internet: no provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider. So there's this expansive exemption from liability for all the internet providers that was created in 1996.
So, you know, if our internet regulations from 2007 are out of date, imagine looking at something that was written in 1996, when, as the sponsors, Ron Wyden and Chris Cox, specifically said, they were trying to protect this baby industry; they didn't want to strangle it in its crib. They wanted to allow it to grow. And grow it has. In the third quarter of 2019, Amazon had revenue of 70 billion dollars, Google had revenue of 40.3 billion, and Facebook revenue of a mere 17.65 billion dollars. That was just in one quarter. So I don't know that we can fairly say that they're still babies in their cribs that we need to coddle and protect. And while originally I think there was a lot of excitement about the internet as a source of political information that could be low cost, that could allow dissidents to find each other and mobilize, and allow upstart candidates to avoid having to raise big money and get their message out, I think what we've seen over the last number of years is that there's a real dark side to politics on the internet. Internet companies microtarget political advertising, which is a topic I can talk about later if you're interested. They create filter bubbles. They create this atmosphere where counter-speech cannot, because of the microtargeting, really emerge. So you get very narrowly targeted ads directed at just a few people, and somebody else who might have different ideas doesn't see the same set of ads; they don't know that you might need, or want, someone to provide you with information to counter those arguments. The internet can amplify political information and disinformation, and we saw this coming from Russia in the 2016 election, but what is kind of scary now is that we're seeing domestic actors mimicking some of those Soviet-style campaigns. I can imagine ways of going after foreign interference; it's harder when it comes from domestic sources. The platforms are failing to adequately protect against foreign interference; at least at the point of 2016 they were getting money for ads in rubles, and it didn't occur to them that that was a problem. They've caught that now and they're not doing that anymore. The algorithms are designed to promote, at all costs, staying on the platform, and they've found that the best way of doing that is to keep people riled up. So the platforms, I think, are playing this really negative role in our civil discourse, which is becoming, frankly, pretty uncivil. And they have a serious problem with inauthentic users and bots, and they've got kind of a whack-a-mole approach that isn't really very effective. So we decided to follow the money on the way advertising works on the internet. The reason that they are making all this money, and that they're so effective, is that they suck all of your personal data out of everything you do online. I mean, it's really pretty scary, reading things like: your flashlight app is sending location data out, and companies are marketing that. Who would think that your flashlight was collecting data on you? But because of Section 230 there's this broad immunity, because the platforms are not seen as the originators of the content.
There's been some pushback on that, and there was one case where at least one judge was willing to say, you know, the way the platforms are packaging the information is something akin to being a content provider. That hasn't won anybody over; it's an outlier decision by that one judge, and the panel went the other way. But I think it shows that people are thinking about this differently, and really, the platforms are not operating like the phone company, just blindly transmitting information through the pipes. They're taking an active role in selecting what you're going to see, and personally, I would not be averse to seeing inroads on 230 on those grounds. However, we can get at it a different way, by making them pay for the information they're taking from all of us. A platform that was, you know, started in a dorm room with a philosophy of move fast and break things seems, now that they've broken things, to have an approach that is more like go slow and don't clean up your messes. So I think we need to be creative about how we go after these problems, and one way would be to impose costs on the front end and reallocate the costs to where they belong. I believe this information belongs to us, and if the platforms were forced to pay for the data that they're taking from us, that would create different kinds of incentives. Maybe they wouldn't have so much targeted advertising, and in any event, we could reclaim something that belongs to us. So we think that this kind of follow-the-money approach might work, and in addition, what we're proposing is a kind of surcharge of five percent, which we describe as a democracy dividend, that could be used for things like funding public media, fact checkers, public campaign financing, and other democracy-enhancing approaches. And I've been told that that is the end of my presentation, so I will stop there, but I'm happy to answer questions on any of that. Thank you. [applause]
Patrick? Alex, could I borrow that clicker for the... okay. Good afternoon, everyone. My name is Patrick Day; as mentioned, I'm a senior policy counsel at Cloudflare. I want to thank Georgetown for holding this timely discussion. I'm here in a personal capacity, though before I start to talk about voter privacy, I would be remiss if I didn't mention two programs that Cloudflare runs to protect election infrastructure, offering firewall protection and related services free of charge. There are now 174 domains of state and local governments in 26 states using Cloudflare's free services under the program, and we launched a similar one for campaigns. I have nothing to do with those programs, but I'm proud to be associated with them; if you want information, let me know.
I was national security counsel for Senator Feinstein. Our committee was in the middle of an inquiry into Russian election interference, and I was asked to look into an analytics firm in the United States called Cambridge Analytica. I'm sure everybody here is familiar with this: the CEO caught on camera describing work in Ukraine and offering to entrap politicians in a fake sting for an election in Asia, and stories that Cambridge, in an election in Nigeria, had used sensitive medical information and released it to the public in order to throw the election. That was interesting for our committee. Over time, as we looked into the company's activities, I became much more interested in the things they were doing in the open with voter data in the United States, and how they'd come to occupy such a prominent role in U.S. politics. Cambridge Analytica worked in 44 races in 2014 and in
50 races in the United States in 2016, including on behalf of two of the major party candidates for president. We could spend quite a bit of time talking about Cambridge Analytica, but I want to highlight three points that I think are relevant to our discussions today, particularly election regulation in the data space. Over the course of our investigation, the two questions I got asked most often were: what is psychographic targeting, and does it really work? Psychographic targeting, if you're not familiar with the term, is a technique that was developed largely in the commercial sector for online advertising. The premise is that your individual personality traits are inferred from data gathered about you online, through social media like Facebook, by measuring things like your openness and extroversion, and those traits are then used to predict your behavior and to alter it. In the commercial world the objective is to make you buy pants or shoes, et cetera; it takes on a different meaning in the electoral context. The three studies on the slide were referred to us by Cambridge, and I think they stand for three important principles, or at least things I wasn't aware of. The first is that private traits are predictable from your digital footprint. From innocuous Facebook activity, liking Katy Perry or the Super Bowl or sneakers, you're revealing highly sensitive personal information about yourself, calculated by algorithms. The researchers found they could use Facebook likes to accurately predict sexual orientation, ethnicity, religious views, personality traits as we discussed, intelligence, happiness, use of addictive substances, parental separation; information you may never have revealed publicly is now available through the algorithms. The second piece is that computers do a better job figuring this out than humans do: with enough Facebook likes, they could determine your personality better than your spouse could. Right, I have no additional comment on whether that's a valid measure or not. And the third piece, where it hits home, is digital mass persuasion. By employing these psychographic techniques in a real-world study of 3.5 million people, the researchers found that matching messages to individuals' underlying psychological traits produced 40 percent more clicks and 50 percent more purchases over the course of the campaign. I'll just read one passage that they put at the front of the report about why it matters: digital mass persuasion could be used to covertly exploit weaknesses in people's character and persuade them to take action against their own best interest, highlighting the need for policy interventions.
The second point about Cambridge Analytica, and I think the quote is there by the picture, though unfortunately I won't do it in a Sean Connery impression, is that we know a couple of things about Cambridge Analytica and the Russian government. We know that the CEO was briefing individuals with ties to Russian intelligence on U.S. voter targeting activities and data, presumably the psychographic data, at the same time Russia was targeting U.S. voters to engineer a specific electoral outcome. We also know this is happening pretty much all over the world. A second piece, which I think is interesting though of a different flavor, is a 2019 report from Oxford's Computational Propaganda Research Project: they found evidence of organized social media manipulation to shape domestic audience perceptions in 70 countries in 2019, up from the totals in 2018 and 2017. The third point about Cambridge Analytica, and this was mentioned before, is that they weren't alone.
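To make the mechanics of the "likes" studies Patrick describes a bit more concrete: at bottom they amount to fitting a standard classifier on a sparse user-by-pages matrix. Below is a minimal sketch of that idea in Python with scikit-learn; the matrix sizes, the synthetic "hidden trait," and whatever accuracy it prints are illustrative assumptions, not a reproduction of the published studies the speaker cites.

```python
# Minimal illustration of predicting a trait from "likes" data.
# Entirely synthetic -- the real studies used millions of actual Facebook
# like records; nothing here reproduces their data or models.
import numpy as np
from scipy.sparse import random as sparse_random
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_users, n_pages = 5_000, 2_000                       # users x pages-liked matrix
X = sparse_random(n_users, n_pages, density=0.01,     # each user likes ~20 pages
                  random_state=0, format="csr")
X.data[:] = 1.0                                       # binary like / no-like signal

# Pretend a hidden trait (say, one personality dimension, binarized) correlates
# weakly with a small subset of pages -- purely made-up ground truth.
true_weights = np.zeros(n_pages)
true_weights[rng.choice(n_pages, 50, replace=False)] = 1.0
y = (X @ true_weights + rng.normal(0, 1, n_users) > 0.5).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy on the synthetic trait: {model.score(X_test, y_test):.2f}")
```

The point of the sketch is only that nothing exotic is required: a plain linear model over a like matrix is enough to recover a trait that is merely correlated with liking behavior, which is why the predictability findings generalize so easily to sensitive attributes.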
Cambridge no longer exists, however. I just took a quick collection of entities, and I'm sure there are more: these are groups that either employ former Cambridge Analytica staff, contracted with Cambridge Analytica in the United States, or serve similar clients. One of the companies on the slide, I won't say which, reportedly is offering an app that was developed for a U.S. politician to collect voter data in eastern Ukraine on behalf of a Russian-leaning Ukrainian candidate.
The policy response: the details that I relayed to you created a pretty compelling reason for Senator Feinstein to introduce legislation, which she did; she introduced the Voter Privacy Act. Now, I haven't done PowerPoint presentations in a number of years, and I was under the impression that that was part of this presentation, so I put these slides together; I apologize that the graphics aren't displaying as they did on my monitor. Anyway, the Voter Privacy Act essentially takes data privacy principles that you may be familiar with from the GDPR or the California Consumer Privacy Act and overlays them on top of the Federal Election Campaign Act, so that if it passed, individual voters would be able to instruct entities regulated under federal election law how to use their data. If you don't want the DNC to use your information to target ads at you, you can tell them to delete it. There's one provision you can't see on there that I think is novel among data protection rights, and that is the right not to be targeted based on your personal information. A third-party platform like Google or Facebook could have far more data on you, and even though you told a campaign not to use your data, it could still target you using the data Google or Facebook holds; so when you opt out, the entity regulated under the act can't target you on social media either.
Having spent my time on FISA, and not being a First Amendment expert, the first thing I heard was: you're going to have a First Amendment problem. Most folks are familiar with Sorrell, the Vermont data privacy statute that the Court struck down. It implicated the First Amendment right of a speaker to have access to data as part of communicating with an audience, and the Court found that the government's interests in that case, which it said were to help protect privacy and the doctor-patient relationship, weren't sufficient to sustain the regulation. The point is well taken that in any attempt to regulate the use of data, the first obstacle is the First Amendment right of speakers to have access to it as part of communicating with a broader audience. And I think that's particularly acute; that case was in the commercial speech doctrine area, and it becomes even more acute as you move into the political speech area. So I will try to wrap up with a positive point. I actually think the silver lining of Cambridge Analytica is that there's so much more information in the public record about how personal data is being used in the electoral context, and I actually think, even given the hurdles you might encounter with a challenge under Sorrell, that information about how voters are being manipulated and coerced, in addition to the national security implications of foreign nations using these techniques to engineer outcomes through our electoral process, makes a compelling case that regulation in this space could survive such a challenge and may actually be a national security imperative.
It's great that you brought everybody together today to talk about these issues. I'd love to just comment on the first panelists, but I'll give my remarks and maybe talk to them later.
I want to start by underscoring the urgency, and then I will get wonky, go back to the 230 discussion, and talk about what we might do in a broader framework. I don't know if any of you noticed, but the Bulletin of the Atomic Scientists recently moved the Doomsday Clock up to 100 seconds to midnight for the first time ever, and the first item in the press release about it was disinformation, because, they said, by reducing trust and corrupting the information ecosystem, we've undermined democracy and undermined our ability to deal with existential threats. No sooner did they put out that press release than Iowa proved them right. Among all the things we know about Iowa, I don't know if you noticed, but Judicial Watch had put out a false claim about voter fraud before the end of voting that day, and the secretary of state, the Republican secretary of state in Iowa, debunked it and called for them to take it down. Each platform dealt with it completely differently, because we have no code for how platforms should handle this. Facebook was very responsible: they put a screen up over it, so before you could share it you had to acknowledge that you knew it had been rated false. Twitter decided it didn't break its rules at all, and it was left up. The bad actors exploited, played arbitrage with, the fact that these platforms all connect; people look at all of them, and so it spread along with all the other disinformation. And this is not just happening in the U.S. We hosted a panel yesterday and learned that many of the same dynamics, third-party PR firms and think tanks in the media ecosystem, are happening to an even greater degree in Kenya, India, and Brazil. The Kenyan expert who spoke said she finished her first article about Cambridge Analytica right before the U.S. election, because she was so focused on what was happening there.
So what happened? Like other people in the room, I have been working on these policy issues since the internet was called the information superhighway. I was in some of the rooms where that initial framework for the internet was put together. If you go back to the thinking at the time: yes, it was an infant industry, and there was this idea that there should be regulatory breathing space for an infant industry. There was also the First Amendment; you can't have the government regulating speech. And we were also in this really deregulatory moment, when we believed this technology was going to give voice to the voiceless and power to the powerless. Part of the deregulatory piece was the sense that agencies are slow-moving, that it takes such a long time to write regulations that it would kill innovation, and so this new model emerged. It's not that it was completely laissez-faire, even though we call it self-regulation; the idea at the beginning was a multistakeholder process in which companies and civil society and others would get together and come up with the rules, and they often did that with the threat of regulation behind them. They didn't have the threat of liability, as you do in many other industries, but they had the threat of regulation, and that worked on privacy. It worked on setting up the hash database on child exploitation at NCMEC. So there was this model, and what I would like to argue is that the multistakeholder model is broken, that the platforms and the agencies have suffocated it. I don't know if you noticed, but a bunch of civil rights leaders wrote a letter to Mark Zuckerberg, completely frustrated and even angry.
They had been working on a civil rights audit with Facebook, and they felt they were not being dealt with in good faith and that they were not making progress. So much for civil society stakeholders working with the platforms. The platforms, or Facebook at least, had agreed to give data to vetted researchers to hold them accountable, through something that was created called Social Science One; press releases were put up the night before Mark Zuckerberg testified in the Senate, saying they were going to be transparent and accountable. A year later the leaders of Social Science One also wrote a letter, in complete frustration, because they had not gotten the data they had been promised. So much for the multistakeholder process. And the agencies are not stepping in, even when they have authority. We've heard about users being manipulated and tricked online; harassment is a feature, not a bug; the news media is being completely undermined. I would argue the multistakeholder model is completely broken and we need to figure out some new model, some new framework. I won't go into the whole thing here; we will write about it in our article, and we have a report coming out next week. But very simply, the first thing we need to do is update a bunch of offline laws that are questionably applicable in the online environment. What would that mean? Right now you are confronted with what are called dark patterns, which means user interfaces that are meant to trick you. You are not given enough transparency, enough information, and you are tricked. It's really easy to share a meme. Today there's a spliced-up video of Nancy Pelosi tearing up the State of the Union address; it's edited to make it look like she was tearing it at moments she wasn't. A whole bunch of members of Congress are retweeting it even though it's false. It's so easy to amplify disinformation, and the platforms will make it trend. So the dark patterns take away friction and encourage you to spread disinformation. They encourage you to give up your data: it's very easy to click to say yes, go ahead with your cookies; it's really hard to say no, don't use cookies on me. We could go on and on. So let's update some consumer protection laws, requiring transparency, requiring user interfaces that work, light patterns if you will. Campaign finance laws obviously need to be updated, and I would argue the Honest Ads Act is a no-brainer; it's appalling it hasn't even gotten a vote. It's a bipartisan bill. But go beyond that: let's have some know-your-customer-type requirements. If we go to all the trouble of passing a law, it would be great if we had common standards and a common way of doing it. We need an API that is searchable and sortable and that you can use across platforms, so intermediaries and civil society groups can figure out what's going on. But it doesn't do any good if the name of the group is Secure America Now or Let's Have a Better Tomorrow and you don't know who is behind it. You need to be able to pierce that veil; if a 501(c)(4) is doing advertising, there's no reason the platforms shouldn't be able to require that disclosure. Civil rights laws need to be updated; Public Knowledge has proposed updating public accommodation law, for example, for the online environment, which would be really interesting.
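To make the searchable, sortable, cross-platform ad-archive idea above concrete, here is a minimal sketch of what a common query contract might look like. Everything in it is hypothetical: the endpoint URLs, field names, and the PoliticalAd record are invented for illustration and do not correspond to any platform's real API.

```python
# Hypothetical sketch of a common, cross-platform political-ad archive query.
# The endpoints, parameters, and record fields are invented; no platform
# currently exposes this unified interface.
from dataclasses import dataclass
from typing import Iterator
import requests

@dataclass
class PoliticalAd:
    platform: str
    sponsor: str           # the "paid for by" entity shown on the ad
    beneficial_owner: str  # who is actually behind the sponsor (know-your-customer)
    spend_usd: float
    targeting: dict        # e.g. {"geo": "zip:20001", "age": "35-44"}

# One archive endpoint per platform, all answering the same query contract.
ARCHIVES = {
    "platform_a": "https://ads.example-platform-a.test/v1/political",
    "platform_b": "https://ads.example-platform-b.test/v1/political",
}

def search_ads(keyword: str, since: str) -> Iterator[PoliticalAd]:
    """Query every archive with identical parameters and yield normalized records."""
    for platform, url in ARCHIVES.items():
        resp = requests.get(url, params={"q": keyword, "since": since}, timeout=30)
        resp.raise_for_status()
        for item in resp.json()["ads"]:
            yield PoliticalAd(
                platform=platform,
                sponsor=item["sponsor"],
                beneficial_owner=item.get("beneficial_owner", "undisclosed"),
                spend_usd=item["spend_usd"],
                targeting=item["targeting"],
            )

if __name__ == "__main__":
    # Sort by spend so researchers and civil society groups see the biggest buyers first.
    ads = sorted(search_ads("election", since="2020-01-01"),
                 key=lambda ad: ad.spend_usd, reverse=True)
    for ad in ads[:10]:
        print(ad.platform, ad.sponsor, ad.beneficial_owner, ad.spend_usd)
```

The design point is the one the speaker makes: the value is not in any single platform's archive but in a shared schema (including a beneficial-owner field) that lets the same question be asked of all of them at once.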
Then, and a number of people are talking about this, we think there should be a PBS of the internet, and this matters very much to what the commissioner was talking about. We need to tax online advertising revenue and create a fund, like New Jersey just did, for public-interest local journalism. The revenue for journalism is being siphoned into the online environment at the same time as these dark patterns mean that content mills, an Atlantic writer called them Potemkin outlets, pretend to be independent journalism but are really instrumental journalism, which is much less catchy. They are there for financial or political ends; they are not there to be independent journalism, but they look exactly like independent journalism that follows traditional journalistic standards, with bylines and corrections and so on. That advertising revenue should be taxed, and we need a white space, the way PBS has spectrum on the dial; it needs to look different so you see what it is. And then we need to reinvigorate the multistakeholder model, partly because under the First Amendment the government can't get into the details, and also because it's true that the agency model is slow, it makes things take a long time, and technology changes all the time. We would suggest dialing back 230, or really clarifying 230, in a few ways, so that it would be a bit of a stick hanging over the heads of the industry, and you could condition it on companies signing up to a code of conduct, sets of consistent standards across platforms on things like fact checking, moderation, and also some of these ad rules and so on. We would also suggest taking a look at things like Public Knowledge's idea that maybe 230 shouldn't apply to ads. Of course you wouldn't be able to sue anybody over political ads, because of the First Amendment, but just bringing that greater scrutiny might produce more fact checking, might produce a notice-and-takedown regime. Some groups are thinking now about whether 230 should limit liability under civil rights laws, so that might be something you could exempt from 230 as well. The basic idea is to update the offline laws for the online environment, to fund what Ellen Goodman, whose work I've been drawing on, calls the signal that is being drowned out by the noise, which is local journalism, and to reinvigorate the multistakeholder model with an agency to help guide and work out the rules. We call for a digital democracy board, but it could easily be the FTC or the FCC, as long as they are given the capacity to do this kind of work. [applause]
Thanks, Julie and Alex. Thanks especially for having me be the last speaker on the last panel right before the cocktails. I wanted to talk about federal legislation to address election day crises. There have long been threats on election days from natural disasters, including hurricanes and earthquakes, and from newer threats like terrorist attacks. In fact, as many people know, 9/11 occurred on a primary election day in New York, and that led the state to order a redo of that election. The 2016 elections revealed a new class of election threat, from foreign interference and social media disinformation campaigns. Those threats on election day could take several forms. One would be attacks that actually change votes, as the last panel discussed, but there are all kinds of other threats that could occur on or around election days, including hacks of voter registration rolls, some of which could falsely indicate that voters have already voted.
Some people say that if you have paper backups for the poll books, that's enough, but I don't know how that handles the situation where somebody has made it appear that you've already voted. As others have talked about, there could be attacks on the electrical grid, especially in jurisdictions where the votes are expected to be close. There are all kinds of social media disinformation campaigns that could confuse voters about where to vote and how to vote. I think some people have seen examples where people said you could vote by text and gave you a number to send your vote to, and that would lead people to think they had voted when they had not. There could also be fake news about terrorist attacks that had not occurred. It's interesting that in 2014, before anyone even knew what the Internet Research Agency was, and this is outlined in a 2015 New York Times Magazine article, the Internet Research Agency staged a fake news campaign indicating that there had been a chemical explosion in St. Mary Parish, Louisiana, and it did things like creating fake CNN webpages and videos. Apparently that led to a fair amount of panic there, and at the time everybody asked, why would the Russians want to do this in this little town in Louisiana? And the answer, I think, was that they were practicing. Also, some people say these threats are not that real. But of course the federal government, together with local and state governments, has for the last few years been engaged in all kinds of contingency planning to deal with these kinds of threats and others, so I think that indicates that people who have some knowledge are very concerned. In 2016 it was Russia, but now of course there are many other nation-states that could be involved, including China, North Korea and Iran; I think people are particularly worried about Iran after the recent killing of the Iranian general. To get back to what the Russians in particular are trying to do: as many of the intelligence agencies and the Senate Intelligence Committee have written, this is part of the longstanding Russian active measures effort, going back to the beginning of the Soviet Union, to destabilize Western liberal democracies. It's really the internet that has made that much more feasible to pull off, and they are trying to fan the flames of divisiveness all over Western democracies. That includes not only the 2016 elections, but also Brexit, the recent French elections, the Catalan separatist movement and many other places around the world. And although Congress has, as I said, broad authority to regulate a lot about when elections are held and how they work, there are not any federal statutes that really address election day crises and postponing or redoing elections. On the other hand, you would have thought, since there have long been these other kinds of threats, like hurricanes and natural disasters and more recently terrorist threats, that the states would have well-thought-out laws on how to deal with those things. But actually a lot of states don't address this in their statutory law, and a lot of what is out there is very inconsistent. That might have made sense when you were dealing mostly with things like hurricanes, which might occur in geographically isolated areas, where local conditions might mean you should have different kinds of responses.
But I think these new kinds of social media disinformation threats, and the other kinds that I went over, are going to require a high level of technical sophistication to address. They are probably going to occur in more than one place at once, and if there are inconsistent approaches in different states, including none at all, I think that would feed into the very purpose of the state actors behind them, which is sowing doubt about the legitimacy of election outcomes. Regardless of whether there are statutes that address these issues, lawsuits will still get filed challenging outcomes based on due process and other claims. If there are not well-thought-out statutory schemes to deal with them, I think those cases will take a long time to resolve, and the results will probably be inconsistent and unsatisfactory. Going back to Bush v. Gore, I think there's good evidence of that, for the Supreme Court at least purported to stop the Florida recount based on the lack of clear statutory guidance about what the standards for a recount should be, and also looked at some of the deadlines under federal law, in particular the safe harbor date for states to certify their electors so that Congress couldn't challenge them. So to me, with all that in mind, it's clear that there's a need now for federal legislation to address these kinds of election day crises, and doing that would also help produce more consistent responses to the older threats. The issues are very complex, though, so I won't try to spell out what I think all the answers are, only the issues that really merit discussion. And the problem in a lot of these scenarios wouldn't be that the vote was close. Probably there would be enough votes cast, but the question would be: should states certify results if it's clear that these kinds of attacks have led to many people not being able to vote? To go ahead and certify electors or victors in those elections would, again, help to destabilize the legitimacy of those elections and of democracy. So with those thoughts in mind, a few thoughts on the shape legislation could take. I mentioned earlier that the 9/11 attacks occurred on a primary election day in New York. New York at the time didn't have a clear statute, but like a lot of states it had emergency powers that the governor could invoke, and he ordered a redo of the elections, or a postponement. It came under a lot of criticism because he ordered that the elections be redone statewide and threw out all the votes that had already been cast, even though a lot of people had voted. So after 9/11 New York came out with a statute, New York Election Law section 3-108, that provides an interesting framework for addressing all these kinds of threats. It allows the state to provide for an additional day of voting in affected areas only, so it wouldn't necessarily be statewide, and they don't throw out the ballots that were already cast. And it's triggered by a showing that, as a direct consequence of a natural or human-made disaster, fewer than 25 percent of registered voters actually voted. I don't know whether 25 percent is the right figure, but I think that kind of law at the federal level would provide a good framework for dealing with some of these newer cyber threats. And I think the more difficult issue, as I said, is that a lot of states have laws that put the authority to order a redo in the hands of governors.
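As a worked example of the 25 percent trigger just described: the threshold and its disaster-causation condition follow the speaker's description of New York's approach, while the function and the figures below are hypothetical, just to make the arithmetic concrete.

```python
# Sketch of a New York Election Law 3-108-style trigger: an affected area
# qualifies for an additional day of voting if, as a direct consequence of a
# disaster, fewer than 25% of its registered voters were able to vote.
# The jurisdictions and turnout numbers are made up for illustration.
TURNOUT_THRESHOLD = 0.25

def qualifies_for_additional_voting_day(registered: int, votes_cast: int,
                                         disaster_was_direct_cause: bool) -> bool:
    """Return True if the affected jurisdiction meets the statutory trigger."""
    if registered == 0 or not disaster_was_direct_cause:
        return False
    return (votes_cast / registered) < TURNOUT_THRESHOLD

# Example: 18% turnout in a county hit by the disaster, versus a county that was not.
print(qualifies_for_additional_voting_day(100_000, 18_000, True))   # True
print(qualifies_for_additional_voting_day(100_000, 62_000, False))  # False
```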
And, of course, in these contentious, politically divisive times, I think the hardest issue is who gets to decide whether there should be a redo or a postponement. One way to address that is to require a double trigger: maybe a governor at the state level but also someone at the federal level. And for all kinds of reasons, maybe it shouldn't be a president who gets to make that call, especially since a lot of the crisis scenarios people have looked at involve what happens if a president doesn't leave office after losing; you wouldn't want to make it easy for the highest elected official in the land to say, no, there's been some serious fraud and we need a redo. So maybe the Gang of Eight in Congress would be a place; I'm not sure. And finally, I did that one slide, which just shows the timeline for the presidential election this year. Most of those dates are set by Congress, including election day. I think Iowa is a good example of how something goes wrong and people scramble to figure out how to address it. If something like that happened on a presidential election day, well, again, in Bush v. Gore they said, we have this safe harbor date coming, we have to stop the recount. I don't think that's the best way to handle serious questions about the legitimacy of a vote. So there is a lot of room to build in more time. You could even move election day, make it earlier; it doesn't have to be the first Tuesday after the first Monday in November. That might raise its own issues, but even with some of these other dates I think there is room to build in more time, which is probably something that would be useful to do. And I'm happy to take questions later. [applause]
Thank you all for a set of really thought-provoking talks. I want to cut in first with a question about one intervention, which goes to both of you: limiting voter data collection or use, and thinking about how we intervene at this point of microtargeting. Patrick, in the Voter Privacy Act, the approach you and Senator Feinstein and the other cosponsors used focuses on the regulated entities, thinking both about a disclosure regime and also about use restrictions or requiring consent. Commissioner Weintraub, it sounds like you are stepping back one level bigger by thinking about a payment structure for this information. It would be really interesting to tease out these different approaches, probably starting with you, Commissioner Weintraub; you got cut off, so I want to give you a little more time.
Okay. I want to be clear, because I have a theory on microtargeting, which is the proposal I've been encouraging, to the best of my ability, the platforms to take on their own, and then there's a different approach that is in the article itself; so which one do you want? Go for both. All right. With all of this we have to be really careful, I have to be really careful, in terms of what the government can do without intruding on people's First Amendment rights to express themselves. So I have, on the one hand, proposed to the platforms that when it comes to political advertising they should limit the microtargeting in order to allow for more counter-speech. That's the classic First Amendment response: if you don't like what one person is saying, you can raise your own arguments in opposition. The platforms have come up with very different approaches to this.
Twitter said, we're just getting out of the political advertising game altogether, which struck me as a little bit draconian and perhaps cutting off a productive venue for people to engage in political speech; and, in fact, they found that it's not so easy to define what political advertising is, and then got some complaints about that. Google has come up with the closest approach to the one that I recommended, and therefore I think it's the best approach of the three major platforms, in that they said they are not going to microtarget below the zip code level. You're still able to buy by age and gender, but in terms of geography they will not go below the zip code level. I know from complaints we've gotten over the years about mailers that go out within zip codes that if you blanket a zip code, you are going to find somebody who does not agree with the perspective in the political information you are sending, and they will be the ones who are motivated to engage in counter-speech, or to complain to the FEC sometimes. Facebook has really pretty much taken a hands-off approach and said, well, we studied it all very carefully and decided that we're going to do very little. That was disappointing, and I'm hoping that with some public pressure they might change their minds, but I haven't seen much indication that they're inclined to do so. In the symposium article, and I really did give it short shrift because I was trying to cover too much, one of the problems with looking at Section 230 of the Communications Act is that even if one were to cut it back in some way and open the platforms up to some form of liability, the question is who would have standing to sue under those circumstances. Because if what you're talking about is a harm to democracy, and everybody in the country is suffering the same harm, who has an individualized harm to sue over? By analogy to the taxpayer standing doctrine, maybe nobody. That's kind of a problem, which is why we decided to look at it in terms of adjusting the costs involved. Instead of them getting this freebie of all of your personal data, which most people do not consciously agree to hand over, you get these long terms of service and they ask you to click on them, and, like, whoever reads that stuff, right? They are in effect contracts of adhesion. I know there are various movements out there to delete Facebook or other platforms, but most people feel like they really need to be online in order to engage with their communities. If the price of being online is just to allow them to suck all your data out, I don't know that people feel they have a meaningful choice about that, or that they're even, as I said, consciously giving this up, given the way these apps interact with each other. It's really kind of terrifying, the amount of information my cell phone knows about me. This is very different: my television doesn't know this about me, my mail doesn't know this about me, my landline doesn't know this about me; my cell phone, oh my gosh, it knows so much about me. Some people in the audience are going to debate you on that, but continue. Well, yeah, with the internet of things, you mean? That's scary, too. We bought a lower-grade washing machine because I really didn't want my washing machine to be connected to the internet. It doesn't have to text me when the laundry is done. [laughing] Sorry.
The point is that by making them pay for this, and I think this really is also sort of a consumer protection approach, because I believe this data belongs to each of us, if they want it they should have to pay for it, and that will really adjust the economics of how they manage their advertising, and perhaps introduce some more friction into the system. You are really pulling back to a different macro level; you do not sound like an FEC commissioner right now. Big picture, big picture. When they get wind of this idea, they're going to wish that I had stuck with suggesting amendments to Section 230. You heard it here first. Patrick, talk a little more about the case of microtargeting. I wish I could reconcile the approaches in real time; it's fascinating what the commissioner is proposing. But to give more context to what we set out to do, I will read you my favorite Cambridge Analytica quote, and this is Cambridge describing its psychological operations: changing people's minds not through persuasion but through informational dominance, a set of techniques that includes rumor, disinformation and fake news. Obviously, on its face that kind of behavior is fairly disgusting and pernicious, but without being able to address the content directly, the question for us became: okay, how do we mitigate the effectiveness of these types of techniques without addressing the content? As I pointed out with the studies, it's very clear that the more data they have on each individual person, the more effective the messaging becomes. We tried to cut the link between the data that is available to political entities and the sophisticated psychological targeting efforts done without an individual voter's knowledge, and to return to the kind of political advertising we were all probably familiar with when we were younger: just sending legal ads to an entire zip code, or through broadcast in some fashion. It's really addressing microtargeting through a different tactic. I use a term sometimes that nobody seems to like except me: it's taking the smart weapons of the information era and turning them back into dumb weapons. By cutting the data, you cut the targeting capability; you can still send out as much nonsense as you want to whomever you want, it's just less effective when you don't know how to persuade each individual person. It's getting at the same issue, I think, in terms of the pernicious effects of microtargeting, et cetera, but we found, given the legal environment and the obvious issues, that the data was really the thing we had the best chance to go after. Just to add one more thought: part of what we're trying to do in going after the taking of the data rather than the messaging is that we're looking at conduct, not content. That's a critically important distinction for First Amendment purposes. I was just going to add to Patrick's point, and we've talked about this, that in Europe there's a discussion right now about whether collecting political and philosophical data about individuals, and by collecting they mean inferring as well, is compliant with the GDPR, or compliant with the underlying EU privacy law, because it is considered sensitive data. The Spanish data protection authority asked for a halt in the last election, and the UK Information Commissioner's Office has talked about this. What you have to do with that kind of sensitive data is get people to opt in, not opt out.
So you can imagine, if a company had to ask, is it okay if I infer your political beliefs based on your magazine subscriptions, because I would like to microtarget you with nonsense, you might not say yes, or at least you would be more aware of what is coming at you. I wonder, and I think it's an interesting question, whether limiting this at the collection side is more First Amendment friendly than other ideas; I don't mean that the commissioner's ideas are less First Amendment friendly, but than some of the other ideas I have heard. If I could add one more point. The other thing we wanted to do with the bill, given the uncertain First Amendment and privacy landscape, was to set things up so that if there were a challenge and a court were balancing constitutional interests, which I think there would be, the posture would be advantageous: not the government prohibiting the use of a class of data, but instead an individual conflict between the voter, in the role of popular sovereignty in our democracy, and the candidate who is seeking that voter's ballot. I think personally it would be an odd precedent for a court to find that the candidate seeking your vote has more of a right to your personal information than you do. We wanted to rely on that paradigm because we thought it was an advantageous comparison of rights. Another paradigm I saw in the introductions was this clear distinction between manipulation and persuasion. Thinking back to the panel earlier in the day, when Julie asked what you would do if you could wave a magic wand, many would underscore the value of getting better clarity about that distinction. I'd love for you to engage on how we begin getting better clarity on that distinction and the utility of doing so. You all discussed 230, so I will defer to you. The one thing I want to throw out is that I don't want us to focus only on ads, because the ads are pernicious and important, but as we've learned, there's a whole world of these third parties out there, and now, instead of paying the platform to run ads, you can also pay a third party to gin up bots or trolls or influencers to push what's called organic content, which means unpaid messages. What they can do is just flood the information space. This is a trick that authoritarian governments use all the time, so that you're not really sure whether other people agree with you. It's not that they're trying to convince you; they are just trying to confuse you and drown out the signal with a lot of noise. I think we need to pay attention to that. One other thing that's happening, as the platforms develop policies and as we become more aware, is that there's a movement in tactics. I think that organic pushing-out of disinformation is one we have to become really aware of, not only as it microtargets and manipulates us but as it corrupts the overall information ecosystem. I would love to go after the use of bots in general. I mean, yes, the Court has said corporations have First Amendment rights, but I don't think anybody would say that robots have First Amendment rights. I believe I was talking to Nick about this in the break, about whether the companies have the technology to detect bots and take them all down. I would really love to see legislation around that, just going after the use of bots. Patrick is eagerly jumping in on that. I'm trying not to do it eagerly but in a restrained manner.
So when I was with Senator Feinstein, she introduced legislation called the Bot Disclosure and Accountability Act. Essentially, the premise is that social media platforms have to have policies in place requiring their users to disclose the presence of a bot associated with their profile. If you operate a bot on a Twitter feed that, every time the New York Times puts out a story, responds automatically and says fake news, you have to disclose to the public that the entity operating that account is a bot. And there is some helpful data suggesting that once humans learn they're conversing with a bot, they find the information much less persuasive. So I think disclosure in this field is particularly helpful, and it deals with the First Amendment issues: you are addressing how the information is delivered, through computational, artificial means, as opposed to anything associated with what the bot is saying. I completely agree. Back to the national security field, where I feel most comfortable: there's a lot of great reporting on the use of bots around important events in the 2016 elections. If you look back at the night of the first debate, something like 25 percent of all the conversation online about the debate was done by bots. It's a significant issue, and I think there's a real policy rationale for a disclosure regime. That may not be as far as some folks want to go, but within the same First Amendment context, that's one way it could be done. That would be incredibly helpful, and of course disclosure is something the Supreme Court is pretty solidly in favor of, so that should work. I would echo that, and then add on top of it that we need a disclosure regime for things like deep fakes. If somebody feels they have a First Amendment right to alter a video, at least there ought to be some kind of disclaimer on it that says this is manipulated video. Back to your question about manipulation versus persuasion: I had looked at some approaches to Section 230, and I don't think disclosure has been that effective. I don't think most people care when they learn who is behind something. I looked at the idea of maybe some kind of right of reply, coming out of a cable and broadcast background; you know, there have long been things like the fairness doctrine and equal time. The courts have expressed some skepticism on First Amendment grounds, but I think under the First Amendment there is some room to develop something like that. And especially in terms of Section 230, which is such a broad grant of immunity, if you could condition the immunity it provides on some kind of right of reply for people who are affected, maybe that would be more effective than some kind of disclosure obligation. The only thing I wanted to add about the manipulation is that I maybe disagree with you a little bit, in that I think obviously the tactics of the bad actors are to hide their identity and their manipulation and to launder it. Otherwise, they wouldn't bother hiding behind a (c)(4). They wouldn't bother pretending it's a news outlet. They wouldn't bother making it look realistic when it is a fake. It seems to me the answer is to label it in a way that is intuitively understood. Right now there's an anti-disclosure: you take the totally hyper-partisan news outlet that has no fact checking, where the piece is an opinion piece even though you don't realize it, and you make it look exactly the same as a traditional, I'll use the word traditional, independent outlet with journalistic standards, more transparency, a code of standards.
Content mills arbitrage, and are essentially stealing, the trust that gets built up in the supply chain of news that comes out of independent journalism, and they're doing it purposefully. I have to think that if you could undo that dark pattern and create a light pattern, where it is intuitively understood that this is a different kind of communication, whether it's a bot, an outlet, a deep fake and so on, that has to be part of addressing the manipulation, at least on the margins. It needs to be disclosure with meaning, which goes to your commentary even on the Honest Ads Act: it's not just disclosure, it needs to be an accessible database, a disclosure that we can see and use. Not to hog the mic, but there's real-time disclosure and then there are after-action reports, and I think those serve really different purposes. The real-time disclosure can't be so complicated; it can't be click-here-to-see-more. It has to be intuitive. It has to come before you're tempted to click and pass something on; it has to be something an individual can make use of. After-action is like the black box when a plane goes down, so we can figure out what happened; there can be an analysis that's handed over to the FAA so they can tweak the rules, or somebody can sit afterwards and define what happened. We need after-action reports that are sortable and searchable and that civil society groups or agencies can make use of. The ad database is something like that, or a moderation log of what was taken down so that you can appeal. These kinds of after-action things are really important, but that's not disclosure to stop disinformation in the moment; it's to understand what happened, to figure out penalties, and to help society understand who is trying to do what. I'm going to ask one more question and then turn it over to the audience, so please start thinking of your own questions. As a final point, I wanted to push back on this idea of reinvigorating the multistakeholder process and where the responsibility lies with the platforms. When I spend time thinking about this, you quickly run up against the challenge of how unified you want the platforms to be in their responses. There are some areas where there is pretty strong consensus about what misinformation is, anti-vaxxer information for instance, and so an industry-wide response is compelling. Other areas are much more contested, or even in thinking about what the best practices for the platforms are, reasonable minds can disagree. There could be real value in experimentation as platforms take different, perhaps more aggressive, approaches toward taking down content. As you think about what an agency would do, or even, absent an agency, about putting pressure on platforms to have more thoughtful, laid-out approaches, how do you reconcile those considerations? Anybody is welcome to chime in. I would pick up on the earlier remark that you should focus as much as possible on practices and not content. If you have fact-checking rules, they should apply to everyone. There should be certified fact checkers, things like that. Who do you rely on for definitions, which kinds of disclosures are you going to have, things like that. I think you focus on practices as much as you possibly can, and on consistency of rules and who they apply to. Should you exempt candidates or shouldn't you? Potentially a floor versus a ceiling.
I'm going to ask one more question and then turn it over to the audience, so please start thinking of your own questions. As a final point, I wanted to go back to the idea of reinvigorating the multistakeholder process and where the responsibility lies with the platforms. When I spend time thinking about this, you quickly run into the challenge of how unified you want the platforms to be in their responses. There are some areas where there is pretty strong consensus about what misinformation is, anti-vaxxer content for example, and an industry-wide response makes sense. Other areas are much more contested, and even on what the best practices for the platforms should be, reasonable minds can disagree. There could be real value in experimentation as platforms take different, more aggressive approaches toward taking down content. As you think about what an agency would do, or even, absent an agency, about putting pressure on platforms to lay out more thoughtful approaches, how do you reconcile those concerns? Anybody is welcome to chime in.

I would pick up on the earlier remark that you should focus as much as possible on practices and not content. If you have fact-checking rules, they should apply to everyone. There should be certified fact checkers, things like that. Who do you rely on for definitions, which kinds of disclosures are you going to have, things like that. I think you focus on practices as much as you possibly can, and on consistency of rules and whom they apply to. Should you exempt candidates or shouldn't you? Potentially it's the floor versus the ceiling. I think you want to have practices; that's the point of having the multistakeholder model: instead of an agency figuring out how things should work seven years after the technology has changed, you have the actual players in the room, it's a little more dynamic, they can make it work for the technology of the moment and then adjust as things change. There's more art than science in that, but that's the idea: the agency would set some kind of floor, the platforms would come up with a practical way to implement it, and then they would be audited against the practices, not the content.

I guess I get nervous about uniformity, especially since technology changes so quickly. If you don't allow anyone to experiment, you get stuck with past solutions, and that would probably not be ideal. Even though I think Facebook should take more responsibility, I am sympathetic to the point that they really shouldn't be the thought police; I don't think they are very good at it. That's why I was focusing more on the idea of more speech, like through a right of reply, rather than restricting speech.

If I could just make one comment on the last piece that you made. [inaudible] I would be fine with a policy that used zip code or any other measure that applies to a broad class of people. It's not so much about whether a candidate should be able to put a message out on Twitter, or whether someone has to be the fact checker or the thought police. It's: can we use these highly sophisticated targeting techniques the social media platforms have developed to send these ads to individual people located throughout the country? I wish there were more conversation about the data and less about who controls the speech and who would be checking whether it's true or not, and I think that gets left out.

There's a reason for that. That is part of my proposal, but the reason is that it cuts into the profits, right? The platforms have a real financial incentive to say, we can't do any of this because we would be the speech police, we would be restricting content, we would be violating the First Amendment if anybody tried to legislate it. Although at the same time they have sort of asked for a little bit of regulation, which I think might be a little disingenuous, because they know it's not going to happen, so they can say, well, we do want to do this, we're just waiting for some guidance from the government, which for a variety of reasons is unlikely to come. But yes, I think it can be done. We can talk about conduct rather than content, and then nobody has to be the speech police, but it is going to cut into the profits, and that's why they like to frame it that way.

We're going to move to audience questions. Please, take it away.

I'll be really quick. I was wondering if I could ask the panel, and anybody can answer, for a little elaboration on the idea of disclosure and this black box that Karen referred to. Part of my perspective is as a researcher: when I go into a lot of tech companies, the answer I often get when I show them a result, and I've done this with Google and Facebook, an algorithmic result, is that they don't have answers even for how the system works. So even with full access to the data they will say these are dynamic systems, there's a lot of A/B testing going on, there are a lot of systems that cannot be, you know, dialed back.
The black box is not really possible to reconstruct or replay after the fact. So one bucket of responses I get is: I can't do that for you, I can't give you an explanation; explainability is really hard. The other bucket is a trade-secret response: I can't keep talking to you about this, because you're getting close to things we really don't want to talk about and can't talk about, for various reasons. As a researcher I run into these moments, and I'm wondering if there's a regulatory response to either of them. One thought is that if you can't explain something, maybe it shouldn't have a public role; that's a don't-build-it-in-the-first-place answer, and maybe that's true with voting as well. The other answer is to ask whether there is some sort of public exemption, and I'm not a lawyer, but a public-interest exemption to trade secret protection when the technology is serving a public function. You can build it, but if it misbehaves, you have to tell us about it, because it has a public function. That was a statement really, not a question, I apologize. I just wonder if you could walk me through some of the regulatory thinking: is that crazy, or is there a third option?

That's great, I love that question. I don't really have an answer for it, but as far as how the algorithm works, I draw regulatory analogies. In the FDA context it has to be a safe product, and admittedly the FDA is a very flawed agency, but theoretically the product has to be safe and has to do what it says it's going to do. Again, conduct-type measures. What I think about is that after the 2016 election, the only reason we found out as much as we did is that the Senate Intelligence Committee demanded data from the platforms, and then we learned the extent to which African Americans were targeted. We would not have known that otherwise. How many rallies were organized by Russia; we would not have known that otherwise. Fake groups. So there has to be that kind of transparency. Our elections are happening on private platforms, and we need that kind of insight. I think that's the type of conversation about the algorithm that you would have as part of a multistakeholder, code-of-conduct process. The question is an interesting one and I don't have a full answer, but I'm sure somebody in the room has thought about it.

I'm going to move on to Julie's question.

This is maybe a different way of asking the same question Mike just asked, but I was trying to sort your comments into buckets based on your point of intervention, right? So there is one bucket for post-crisis response; my question is not really about that. There is one bucket for transparency, which might become post-crisis response. There is one bucket for targeting, which is how particular voters are singled out to receive messaging. And there was one bucket for taxing and funding. There's another bucket that none of you really spoke to, except that Mike just brought it up, which has to do with the algorithm, and when it has come up you use an interesting word, which is organic, right? It's good to take on board the ramifications of that word; it's like it's a tree, right? It's natural. And yet it ties into a lot of what the first panel talked about, the performative dimension of information consumption, people circulating things to their network.
But to use the word organic for that process kind of masks the question of how we have all been conditioned to hit like and ha-ha and hate, so that you just reflexively press buttons. I watch my teenager; he can press buttons a hundred times a minute, and there is a kind of fakery embedded in characterizing that as organic, right? And I suspect you run into the trade secrecy problem here, but the companies that are leveraging the data-harvesting and revenue-generating potential of organic spread know rather a lot about the patterns of organic spread, and presumably could interrupt it, right? Just as you could target voters by zip code, you could interrupt viral network spread, because if you're constantly tweaking your algorithm to spread that stuff faster, presumably you could go the opposite way. And yet I never hear anybody talking about that. I keep waiting for the content moderation debate, which is all whack-a-mole about stuff that is already up there, to get to this other thing. I was wondering if I could prompt you to talk about that as a site of regulatory intervention, which might be a way of talking about what authorities the government would need to be able to do that, because I think there's an arguable gap there. But I would like you to speak to it, if you would.

I feel like we should have dinner and talk it all through. So one of the things I was trying to talk about was the lack of friction, and when I talk about dark patterns and light patterns, how manipulative it is in real time and how easy it is to share. So that's thing one. Thing two is, I think there's more to the information pollution than either the ads or the algorithm. Again, I use organic in quotes; organic just means not paid. Before the internet we talked about paid media and earned media, which is when you hold an event and it gets on the evening news. There are these networks of either true believers, or bot networks, or influencers you can pay or who are real influencers, and there are ways to manipulate them. Even real people with huge followings act like army generals: they send something out and everybody follows suit. So I think it's more than just the algorithm that's going on.

I have so many things; when I started to talk I didn't get to all the things we had in mind for the code of conduct, so let me just go through some of them. One would be something like a consistent political-ad definition for the rules, including candidates and political figures; using accredited fact checkers; committing to timely review. The way it works now is that a journalist or somebody else flags a posting and asks whether it is true or not, and it goes to the fact checker. They don't fact-check it if it's an opinion. They don't fact-check it if it claims to be satire. There are very few fact checkers out there in the world, and when they come back, the platform decides what to do with it. So, doing something about that: a consistent standard for what gets fact-checked, what gets labeled, and what the penalties are. You don't tell the platforms what penalty they should impose, but it should be clarified. That's thing one. Thing two is best practices. Senator Warner has an idea that I love, about the research the platforms are conducting on their users.
This is what you were talking about, the tracking and testing: using best practices for conducting research on humans; deference to civil rights groups on the definition of hate groups; promoting voter information; redirecting the way they do for ISIS content and other things; and then, and this gets to what you're talking about, de-prioritizing engagement in the design of recommendation algorithms and giving users options for tailoring those algorithms. Harold Feld has talked about ideas for altering the algorithm so that it is not optimizing entirely for engagement, so there are ways to think about that; he has some interesting ideas. An appeals process, and whitelisting of news and other scientific information. Those were some of the ideas we were discussing in terms of our code of conduct. Some of it is moderation, some of it a little broader. This is just a draft for people to throw things at, and the algorithm piece, I think, is probably the trickiest.
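As a rough illustration of the de-prioritizing-engagement idea mentioned above, here is a minimal sketch of a recommendation ranker with a user-tunable engagement weight; the signal names and weights are hypothetical and do not describe any platform's actual system.

```python
# Minimal sketch of down-weighting predicted engagement in feed ranking.
# Items carry an illustrative "source quality" signal alongside a predicted
# engagement score; the user-facing knob controls how much engagement counts.
from dataclasses import dataclass


@dataclass
class Item:
    title: str
    predicted_engagement: float  # 0..1, e.g. click/share likelihood
    source_quality: float        # 0..1, e.g. transparency/standards score


def rank(items, engagement_weight: float = 0.3):
    """Score items with engagement deliberately de-prioritized.

    engagement_weight is the user-tunable knob; the rest of the weight
    goes to source quality.
    """
    quality_weight = 1.0 - engagement_weight
    scored = [
        (engagement_weight * it.predicted_engagement
         + quality_weight * it.source_quality, it)
        for it in items
    ]
    return [it for _, it in sorted(scored, key=lambda pair: pair[0], reverse=True)]


if __name__ == "__main__":
    feed = [
        Item("Outrage bait", predicted_engagement=0.95, source_quality=0.2),
        Item("Local election explainer", predicted_engagement=0.4, source_quality=0.9),
    ]
    for item in rank(feed, engagement_weight=0.2):
        print(item.title)
```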
