
I'm the chair of the Federal Election Commission, and I am delighted to welcome you to Digital Disinformation and the Threat to Democracy: Information Integrity in the 2020 Elections, a symposium I am cosponsoring. Disinformation has become a newly potent threat in the digital era. It has weaponized our most cherished freedoms to sow discord and undermine democracies worldwide. Here in the U.S., we saw Russia deploy disinformation as one of its tools when, in the words of the Mueller Report, the Russian government interfered in the presidential election in "sweeping and systematic fashion." Alarms are being sounded throughout America's intelligence community, in the halls of Congress, in the Departments of Homeland Security and Justice, and by many of the experts who have generously taken the time to speak today. They warn that these attacks are ongoing, they are accelerating, and they are unlikely to be waged by Russia alone going forward. In May, when the House Oversight Subcommittee on National Security held a hearing on securing America's election infrastructure and political discourse, I was pleased to be called upon to testify. I spoke then about how the infrastructure of our elections is not just the physical electoral apparatus run by state and local governments but, more fundamentally, the faith that American citizens have in our elections. And I spoke about how that faith has been under malicious attack from our foreign foes through disinformation campaigns. When the Russian government spent millions of dollars to interfere in our election, that directly implicated the ban on foreign spending in our elections that is part of the Federal Election Campaign Act administered by the FEC. When they hid the source of that spending, they violated the core disclosure mission of this agency. But the problem is so much larger than that, and it cannot be meaningfully addressed if each of us views it through our own narrow lens.
Disinformation is a fundamental assault on our democracy, on the united character of our United States. Disinformation exacerbates our divisions and distorts the truth that binds our common experience. The Russian chess grandmaster Garry Kasparov puts it this way, quote: "The point of modern propaganda isn't only to misinform or push an agenda. It is to exhaust your critical thinking, to annihilate truth." Well, today we will prove that our critical thinking is not nearly exhausted and that truth has not nearly been annihilated. I am so gratified that this event has drawn to the FEC one of the largest gatherings that I have seen in my time as a commissioner. We have overflow space. Standing room only. It is really terrific to see you all here. We have on hand current and former officeholders, scholars, journalists, and advocates. We are joined by luminaries such as Senator Mark Warner of Virginia and former Homeland Security Secretary Michael Chertoff. A number of members of the tech community are in attendance, including people from Google, Microsoft, Facebook, and Twitter. Some of them are on panels, but all will have opportunities to ask and answer questions and make comments. I thank you for coming. My cosponsors and I seek to launch a conversation. We have brought to this room many people with many different perspectives. Not everyone could make it, but this is a good beginning. We have a wealth of expertise in this room, and I intend to do far more listening than talking. Some of us here, like me, swore an oath to protect and defend the Constitution from enemies foreign and domestic, and so it is fitting that we are gathered on Constitution Day, 232 years after that Constitution was ratified, to work together to defend the system of government it established, which is, indeed, under attack. The task before us is large and challenging.
We won't solve all of our problems today, but we definitely won't solve them by working in isolation. We have to work together, and we have to see the big picture. I am grateful to have as my partners in this effort the cosponsors of this event, PEN America and Stanford University's Global Digital Policy Incubator. PEN America has been a strong and consistent defender of First Amendment values and a leader in spotlighting the risks to free speech and open discourse posed by disinformation. I want to thank them for the focus, vision, and energy that their CEO and director in particular have brought to this project. Stanford University's Global Digital Policy Incubator is another leader and innovator, bringing together governments, tech companies, and civil society to develop policies that advance human rights and democratic values. Thank you to the executive director of GDPi for all of your help and insights. I won't take the time to fully expand on all of their many accomplishments, but one does bear noting. Working together, they were instrumental in the passage of the first-ever U.N. resolution on internet freedom, which laid out the foundation that human rights must be protected online just as they are offline. I have to get a little housekeeping out of the way. My administrative staff asked me to point out to you the two exits to this room, right over here and way in the back. I'm doing my flight attendant routine. In case of a fire or other emergency, please exit in an orderly fashion and head to the staircase in between the two doors. I especially hope they're not needed today, because I want this symposium to bring light, not heat, to our discourse on misinformation. In that spirit, I'm very pleased to bring to you one of the country's leading lights in the fight against disinformation, Senator Mark Warner.
From his seat as vice chairman, Senator Warner has been at the forefront of the Senate Intelligence Committee's ongoing counterintelligence investigation into Russia's 2016 election interference. And in a town increasingly torn by bitter partisanship, Senator Warner has worked hard with the chair of the committee, Senator Richard Burr, to keep the committee's efforts bipartisan. He is recognized as one of Congress's preeminent voices in the ongoing debate surrounding social media and user privacy. One of his bills would write new and tough data breach standards. If it's digital, Senator Warner is all over it. I am delighted and excited to have him here today. Please join me in giving a very warm welcome to Senator Mark Warner as he delivers our opening keynote.

Thank you very much, Ellen, and thank you for those kind comments. I will also agree: hopefully we will not be rushing for the doors. But if you happen to see a posting on your device telling you to rush for the doors, you may question who is behind that. So I want to try to get this right. I'm actually going to put on glasses today. May I again thank the chairwoman, Suzanne, and Ambassador Donahoe for hosting and putting together this very thoughtful and timely event. Actually, I've never done this with glasses, so I'm going to try to do it without glasses here. Michael, you've probably done it with glasses. I also want to thank, again, PEN America, which has been a longstanding defender of free expression and public discourse. Unfortunately, like many of our institutions today, that discourse is under assault. And it's under assault from dark money, Russian trolls, and extremists who exploit social media. In each of these cases, and others, we see the openness, diversity, and accountability of the public sphere being undermined. The internet optimism of the 1990s and early 2000s obscured this trend for many of us.
There was a bipartisan consensus that the internet was inherently democratizing and liberating. We based major portions of major policies on this underlying optimistic assumption, from Section 230 to our stance on China and the revolutions. And as a result, we now face serious policy challenges. Today, and I say this somewhat reluctantly, I think we need to rethink that optimism. We need to come up with a new set of foreign and domestic tech policies that are based in a less optimistic, or at least more realistic, notion of technology and the internet. Now, to be sure, this is a somewhat surprising place for me to come from. For many years, as Ellen mentioned, I've kind of been known as the tech guy in Congress, and like many policymakers, I shared the consensus that these technologies, and the companies that built them, were largely positive forces. Now we have seen how the misuse of technology threatens our democratic systems, our economy, and increasingly our national security. Russia's attack on our democracy awakened a lot of people to this truth. We know that the United States faces serious threats in the cyber domain from both state and non-state actors, not to mention the threat of misinformation and disinformation efforts by Russia and, increasingly, by those who have copied their playbook. As a result of this recognition, we are finally beginning to have some overdue conversations on privacy, data transparency, and other critical issues related to social media. But we must also confront the way that domestic actors have exploited these technologies. More broadly, our position as a global leader on technology issues has been weakened by the retreat of the United States on the global stage.
As well as by Congress's unwillingness or inability to formulate smart policy responses to the challenges we face. And, frankly, I worry that this administration's haphazard approach to trade may end up exporting and internationalizing some of our worst tech policies. And while I'm encouraged that governments around the world, including the EU, have begun to fill this vacuum, the need for U.S. leadership and pragmatic, tech-savvy policy has never been greater. As vice chairman of the Senate Intelligence Committee, I have spent the better part of the last two and a half years on the only bipartisan investigation into Russia's attack on our democracy. The truth is, the United States was caught flat-footed in 2016, and our social media companies failed to anticipate how their platforms could be manipulated and misused by Russian operatives. Frankly, we should have seen it coming. For one, many of the technologies relied upon by the Russian IRA were not new: the audience building, the exploitation of recommendation algorithms. These and other techniques were longstanding tactics of online fraudsters. And we even saw an early warning sign in the domestic context, in something called Gamergate, back in 2014. You'll recall that Gamergate was a concocted and coordinated harassment campaign waged against women in the video game industry. It foreshadowed how bad actors could use a variety of online platforms to spread conspiracy theories and weaponize Facebook groups and other seemingly innocuous online communities. We also missed the warning signs in the international context. Over the last two decades, adversary nations like Russia have developed a radically different conception of information security, one that combines cyber warfare and information operations into a single discipline. I feel that we have entered a new era of nation-state conflict.
One in which nations project strength less through traditional military hardware and more through cyber and information warfare. But our adversaries aren't necessarily using highly sophisticated tools. They don't need to. They are attacking us opportunistically, using phishing techniques and rattling unlocked doors. In many ways we brought this upon ourselves. We live in a society that is becoming more and more dependent on products and networks, yet the level of security and integrity we accept in commercial technology products is shockingly low. I think about the fact that we still do not have basic security standards for IoT-connected devices. And as a society, we continue to have entirely too much trust in technologies our adversaries have begun to exploit. Now, while some in the private sector have begun to grapple with these challenges, many remain resistant to the changes and regulations needed. Let's face it, and this may come as a shock: Congress doesn't have its act together either. And it's not enough to simply improve the security of our own infrastructure, computer systems, and data. We must work in a coordinated way to deal with the adversaries and bad actors who use technologies to attack our democratic institutions. Internationally, we need to develop new rules and norms for the use of cyber and information operations. We also need to better enforce existing norms. But norms on traditional cyber attacks alone are not enough. We also need to bring information operations into the debate. In addition, we need to build international support for rules that address the internet's potential for censorship and repression. We need to present our own alternatives that explicitly embrace a free and open internet, and we need that responsibility to extend not only to government but to the private sector as well.
The truth is, Western companies who help authoritarian regimes build censored apps or walled-garden versions of the internet are just as big a threat to a free and open internet as government actors. We need to realize that the status quo just isn't working. For over two decades the United States has maintained and promoted a completely hands-off approach, and today the large technology platforms are, in effect, the only major part of our economy without a specific sector regulator. For years we told the world that any tweaks around the edges would undermine innovation or create a slippery slope toward a dystopian system. Instead, because of our failure to act, the opposite has happened. Many countries have gravitated toward that dystopian Chinese model, in part because we have not offered pragmatic, values-based alternatives. And we've seen how laws originally intended to promote good behavior, like Section 230, which was meant to incentivize effective moderation, are used by large platforms as a shield to do nothing. Just last year, Americans were defrauded to the tune of $360 million by identity thieves posing as military service members. Now, as the New York Times reported, these aren't sophisticated state actors using fancy marketing tools. They are pretty basic scammers in internet cafes in West Africa. And the truth is, Facebook faces no meaningful pressure to do anything about it, neither from defrauded Americans nor from the impersonated service members. Section 230 was born out of, and rests on, the incorrect assumption that sites would pursue robust moderation, because the assumption, at least, was that users would flock to other providers if their sites became dangerous or abusive places. Obviously that restraint has not worked. This is just one example of the internet governance regime we've convinced ourselves, and tried to convince the rest of the world, is working just fine. Obviously that is not the case, at least in my mind.
Instead of dealing with the misuse of their platforms, these large companies have externalized the responsibility of identifying harmful and illegal activity to journalists, academics, and independent activists. And rather than promoting pragmatic rules of the road for the digital economy, the United States continues to promote a laissez-faire approach to technology, whether that's refusing to sign the Christchurch Call or pushing new platform safe harbors in trade agreements. Last summer I put forward a white paper, helped in many ways, let me acknowledge, by a great person on my staff, Rafi Martina, as many of you know. I recognize these ideas intersect with a number of the issues that will be discussed today, so I hope this will get the conversation started. We can start with greater transparency. I think folks have the right to know if the information they're receiving is coming from a human being or a bot. I have also put forward legislation called the Honest Ads Act that would require greater transparency and disclosure for online political ads. Companies should also have a duty to identify inauthentic accounts. If someone says they're Mark from Alexandria but they're actually Boris from St. Petersburg, I think folks have a right to know that. And if a large Facebook group claims to be about Texas pride but its administrators are constantly logging in from Moldova and Belarus, again, I think the users following that group should know that information as well. We also, as Ellen has laid out, and I have a series of additional bills beyond the ones I've already put forward, need to put in place some consequences for social media platforms that continue to propagate truly defamatory content. We saw Facebook caught flat-footed in the face of rudimentary audio and video manipulation of Speaker Pelosi. This does not bode well for how social media is going to deal with more sophisticated manipulation techniques as the era of deepfakes comes upon us.
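The administrator-location mismatch described here, a "Texas pride" group whose admins log in from Moldova and Belarus, lends itself to a simple heuristic. The sketch below is purely illustrative: the function name, the threshold, and the idea of per-admin login country codes are assumptions made for the example, not anything a platform has disclosed about its actual detection systems.

```python
from collections import Counter

def flag_location_mismatch(claimed_country, admin_login_countries,
                           threshold=0.5):
    """Illustrative heuristic: flag a group whose claimed home country
    rarely appears among its administrators' login countries.

    claimed_country: ISO country code the group claims (e.g. "US")
    admin_login_countries: list of ISO country codes, one per admin login
    threshold: minimum share of logins expected from the claimed country
    """
    if not admin_login_countries:
        return False  # no login data, no signal to act on
    counts = Counter(admin_login_countries)
    share = counts[claimed_country] / len(admin_login_countries)
    return share < threshold

# A "Texas pride" group whose admins log in almost entirely from abroad:
logins = ["MD", "MD", "BY", "MD", "US", "BY"]
print(flag_location_mismatch("US", logins))  # True: only 1 of 6 logins is domestic
```

In practice such a signal would be one input among many (account age, posting cadence, coordination patterns), since VPNs and travel make raw geolocation noisy on its own.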
Platforms should be granting greater access to academics and other analysts studying trends like disinformation. Instead, we've seen platforms work to close down efforts by journalists and academics to track misuse of their products. We also discussed a number of other ideas in the white paper around privacy, price transparency, data portability, and interoperability. It is my hope that the companies will collaborate and be part of the solution. But one thing is clear: the wild west days of social media are coming to an end. To conclude, obviously there is a lot of work to be done. Democracies have been at the forefront of technological innovation. Now, in order for democracies to continue that leadership and preserve our own social discourse, we need to question some of the outdated assumptions about these technologies and put forward policies that are truly consistent with our values. Ellen, I applaud you and your partners for putting together this forum today. I think it will be an important part of this debate. I actually believe there still remains the opportunity to make this an area where there is no such thing as a Republican or Democratic solution. It is more a question of future versus past, and getting this future right, making sure that we not only preserve our democracy in 2020 but don't allow these platforms to become the advocates of hate and abuse, has to be one of the most important issues we face today. So I appreciate the opportunity to be here. I'll be happy to take some questions. Thank you all very much.

Questions, comments, suggestions, criticisms? Yes, hello. We have mics, and a question here.

Thank you for the talk. I work with academics who study Twitter data. Twitter, as you rightly recognize, has made it much harder for academics to get access to the data they need to build the models that could catch, in real time, the start of something like "Kamala Harris destroyed."
Can you help us restore that access by creating safe places for academics to work on this data?

That is something, and again, this is something we've engaged with all of the platforms on. Sometimes we get a positive response, but then the details come out. We've seen that, for example, in the campaign ad space, where there was a requirement the platforms voluntarily agreed to, to place campaign ads in an area where they could be looked at. It's hard to find. It's difficult to navigate. In the short term, I think the most important thing we can do is to continue to try to work with you to bring pressure on the platforms to be more forthcoming. It's a hard area to figure out how to legislate that access, so I'm hoping persuasion can be used, but ultimately, if not, we have looked at legislative solutions as well. So I'd hope you get with my staff and see if we can find better ways to work together.

Hello, Senator. My name is Katherine Fitzpatrick. I wanted to ask you, how would you either redact or remove Section 230? Would you go with a strict interpretation, like incitement to violence, or would you have it broader? How would you go about fixing that?

Remember, I think sometimes people confuse Section 230 with restrictions on free speech. I mean, free speech is protected by the First Amendment. Section 230 is, in a sense, the adoption (I was an old telecom guy) of policies that basically said let's consider these platforms a bunch of connections of dumb pipes, so they have no responsibility at all for the content. That might have been the right answer in the late '90s. In 2019, when 65 percent of Americans get some or all of their news from these platforms, I think it needs to be reexamined.
So we have seen, you know, platforms say that Section 230 is inviolable, and yet we've already seen areas where we have legislated exemptions: prostitution, sex trafficking, child pornography, bomb-making. And, as a matter of fact, this is one of the things I would hope comes out of these discussions today: Section 230, and how we might rethink it. I mean, if it was really meant to promote moderation, or to move people toward new platforms, then I would argue that one of the tools we're working on right now, data portability and interoperability legislation, is relevant: if you got tired of platform X, you could easily move to platform Y. It's kind of borrowing a tool some of us in the room are old enough to remember. It used to be really hard to move from one telephone company to another, until we had number portability. So it's the same idea around data portability. Let me say, to my mind, and this is a bit more controversial, so don't everybody have a huge reaction against this: I think content, and we've already seen Australia and the UK move in this area, is at least indirectly related to identity validation. And I make this point because, if we're worried about questions of abuse and misuse of information, identity validation is another approach. Now, in America that might not have many negative consequences; if you are a political organizer in Egypt, identity validation is a huge issue. So I actually see 230 and identity as tied. And if we could figure out what the right balance might be, we might need more of one and less of the other. But that is, I think, really where part of this debate ought to head, and I would welcome thoughtful approaches on how we could sort that through.
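The number-portability analogy suggests what data portability legislation would ask of platforms in practice: a platform-neutral export that a competing service can import losslessly. The sketch below is a minimal illustration only; the schema and field names are invented for the example (real industry efforts, such as the Data Transfer Project, define much richer formats).

```python
import json
from dataclasses import dataclass, asdict
from typing import List

# Hypothetical portable schema; all field names are illustrative assumptions.

@dataclass
class Post:
    created_at: str   # ISO 8601 timestamp
    text: str

@dataclass
class PortableProfile:
    handle: str
    contacts: List[str]
    posts: List[Post]

def export_profile(profile: PortableProfile) -> str:
    """Serialize a user's data to a platform-neutral JSON document
    that a competing service could import."""
    return json.dumps(asdict(profile), indent=2)

def import_profile(blob: str) -> PortableProfile:
    """Rebuild the profile on the receiving platform."""
    data = json.loads(blob)
    data["posts"] = [Post(**p) for p in data["posts"]]
    return PortableProfile(**data)

me = PortableProfile(
    handle="mark_from_alexandria",
    contacts=["alice", "bob"],
    posts=[Post("2019-09-17T10:00:00Z", "Hello from platform X")],
)
roundtrip = import_profile(export_profile(me))
assert roundtrip == me  # lossless transfer between platforms
```

The hard part in reality is not the serialization but agreeing on a shared schema and keeping the social graph meaningful on the receiving side, which is why interoperability is usually paired with portability in these proposals.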
I do think, on a technical basis, data portability and interoperability would be a great addition as well, because that would go back to the original premise we had: if you didn't like the kind of information you were receiving on platform A, you could move to platform B. You can't do that today. One of the things I would also add, and I don't want to go too crazy detail-wise on this, but if we're going to do data portability and interoperability, there is another piece of legislation I have, and I find it remarkable that the platforms are saying they can't do this, which is total baloney: we ought to also know what data is being collected on us and what its value is. The idea that they couldn't provide that is bogus. So all of these things fit together. Okay. He has a little vote button.

So I really appreciate that you called out the questions having to do with the private sector here. Of course, we're at the Federal Election Commission, so we're all very interested in election misinformation and security, but just in the news in the last few weeks we're seeing private-sector security incidents involving misinformation. I wonder if you have any thoughts about how a private-sector security response might springboard and enable a better response to election security when it comes to misinformation.

Great question. Well, first of all, how in the heck did we come to 2019 and think that the protection of our election security should be a partisan issue? I mean, would we ever think that protection of our electric grid or our financial system should be a partisan issue? It is crazy. And I would argue that every piece of legislation I've got in this area is bipartisan. So I'm going to get this piece in first, and then I'm going to answer more specifically.
So, you know, let's go ahead and put a bill on the floor that makes clear, because there may be some people downtown who don't understand this, that if a foreign government tries to intervene in an American election, the obligation is not to say thank you; the obligation should be to tell the FBI. Second, we ought to make sure we can improve the security of our voting systems, because at the end of the day, as the chairwoman knows, in many ways the integrity of our system relies on people's confidence. One of the things we can do to improve people's confidence is to make sure there is a paper ballot backup for every polling station in America. Make sure, number three, we pass the Honest Ads Act, so we know who is advertising, with the same disclosure requirements that you have on television and radio. Number four, let's go through some of these knotty issues around identity, Section 230, kind of basic rules of the road for the social media platforms, so that, again, we have, I think, better protections, and recognize that our elections ought to be decided by Americans trying to influence each other. If there are outside forces, they at least need to be identified. Now, in terms of security concerns, this is an area where we have been extraordinarily lax, not just in the short term but for a while. There has never even been the notion of a liability regime around software. I'm not saying we need to head there, but I am saying it's a pretty interesting commentary. The fact is, and we've got bipartisan legislation I think will get through this year, that our United States government is buying billions of internet-of-things connected devices annually, and we've not put in place a security standard for those devices. The cost it will take to rip that out, if we find vulnerabilities downstream, is extraordinarily challenging. I do think, and this goes to the private sector.
We're almost well beyond just the elections. You know, I'm not sure the United States alone can set, for example, cyber norms. I think we need, again, an international framework that says, the same way we said certain things around chemical weapons or land mines were inappropriate, that there are cyber weapons that also should be inappropriate. And if nation-states or others use them, then the attribution requirements and the ability to kind of punch back could be elevated. So I think there really needs to be an engagement with the private sector. This has been an area that has been, for too long, an afterthought. But the potential vulnerability that we have is enormous. I keep thinking of, you know, the nuclear power plant where we're going to spend millions or tens of millions of dollars protecting against attack, and the bad actors may come in through the microwave in the kitchen, because it's an IoT-connected device. The community needs to be more engaged. Listen, this is much more fun than what I'm doing on the normal floor of the Senate these days. I'll take two more and then get out of your hair.

I'm sorry. Alex Howard. Nice to see you again. Thank you for working on these issues. Two straightforward questions. One, should the Information Quality Act apply to government web pages and government social media? Second, what do you think should be done with respect to political campaigns using [ inaudible ]? Could that be, say, accomplished with [ inaudible ], or an agency that takes business from campaigns? Should you disclose [ inaudible ]?

On the first question, I could try to give you a very extended punt answer. I'm not sure I fully know what the Information Quality Act is, so shame on my staff and me. Let me actually look into it before I just try to punt the answer.
And on the second, I would hope that, in terms of standards of behavior within the political process, we could start with kind of an agreed-upon code of conduct that campaigns would all voluntarily agree to. You know, I think in this area, starting with, I'd hate to say industry self-regulation, but some front end that is a voluntary agreement, and then potentially having a back end in terms of malfeasance, particularly, you know, whether it would be the FTC or others in terms of inappropriate business practices, I think, could be debatable. But I do think we need to start with some agreed-upon code of conduct. So far, at least, I'm not sure all the Democratic campaigns have fully agreed. A few of them have; others haven't. I think it's pretty remarkable that there has not been that agreement in terms of what we would all agree would be malicious behavior. And the question we all have here is, you know, it's one thing to use tools to try to generate interest and followers. How we draw the line between using sophisticated techniques in a legitimate way versus absolute misuse, that line needs to be, I think, a little bit more clearly drawn. Do you want to pick the last one, Ellen? And then I'll...

Hi. My name is Michelle. I'm the executive director of the National Association for Media Literacy Education. So, first, I want to say thank you for having this discussion. I would say that we saw this coming, a lot of educators, a lot of scholars, so we're so happy that these conversations are happening at this level. I am taken by how often I'm in conversations about disinformation that don't include education. And I don't know how we solve this problem without taking a very hard look at our education system and how we're preparing our students to succeed in this world. And I don't want to be negative, but I think we're doing a very bad job at it.
So I just want to get your take on how we have more in-depth conversations about how we are educating the next generation.

First, I want to commend your work. Two, I think a basic component of K-12 education ought to be digital literacy, and students ought to have some basic ideas of how to spot false content, inappropriate content. I think there is much here we can learn from Europe, where a country like Sweden, for example, rather than trying to put full legislation in place, worked on educating its country because it was fearful of Russian intervention. I don't know how well they fully did, but I think there are lessons there, particularly from Estonia. There is a NATO-backed center in this space. And I believe, and others here would probably know this since they know everything about the budget, my sense was there were tens if not hundreds of millions of dollars that were going to be dedicated to that NATO disinformation center that may have been part of the great grab of the $3.6 billion taken for the wall. I'm not sure that's making us safer, taking American commitment away from that kind of international collaboration. So, you know, we've got to get our best practices. I do think, just as we've seen civics education and other things happen at the state level, and as a former governor, I think the idea of you and other groups advocating digital literacy in a K-12 curriculum is increasingly important. And then, finally, two last comments. One is, I would argue that our failure to act on these issues around disinformation and misinformation is, I think, indicative, unfortunately, of a larger retreat by our country, and not just during this administration, from being willing not only to do the innovation but then to set the standards and the protocols.
I would argue that since Sputnik, virtually every major technological innovation, if not invented in America, was invented in the West, and we still set the standards. And increasingly we are walking away from setting those standards. Now, in the case of social media we have seen, you know, the EU step up; we've seen the UK and Australia on content, the EU on privacy, individual states. But we're also seeing this in a host of other areas. You know, I'm a telecom guy. If the standards on 5G, a transition equivalent to going from radio to television, are going to increasingly be set by China, that is a huge security concern for us. And particularly as we think about a China that has a very different mindset about how to use these technology tools to build a surveillance state that would make, you know, Orwell's 1984 blush in terms of how much more sophisticated it is, that is something that we as policymakers don't talk enough about. Which brings me to my final point, and I say this to my friends from the platform companies here. I feel that there's been a general pattern of the platform companies kind of playing rope-a-dope with the Congress. You know, "we're interested." But when we get to the details, "we're not really interested in those details. Anything that really affects our business model, we're going to get hesitant about." And I think at the end of the day that is going to be a huge, frankly, a huge business mistake. Because we're not going away. And in a sense, all that's happening is the EU and individual states are simply raising the floor, so that when we do actually legislate you're going to end up with a much, much tougher regime than you would have if you'd actually worked with us in a collaborative way on the front end.
And I think you could end up with a model that will be much more detrimental. At the end of the day, one of the reasons why I have been concerned, and some of the folks on my staff get really upset with me, is that these are at the end of the day all international platforms, and I have been worried about simply replacing Western ones with Chinese ones that have no such protections at all. But I, again, appeal to those of you who are from the platform companies. You know, you need to come to the table, not just with "we'd like to work with you" and occasionally letting academics see small pieces. You need to really be part of this conversation. Because we're not going away, and the wild west days are over. And we are, candidly, one significant event away, and it may not even be in the realm of politics; the potential for market manipulation, frankly, is greater than even the effect these tools could have on elections. We are one significant event away from Congress potentially overreacting, because the next event could be really extraordinarily dramatic. So I again thank all of you for being here. I particularly want to commend Ellen and everybody at the FEC and our partners from PEN America and Stanford for this kind of gathering. It is so terribly important. Back to the earlier question: I don't have, by any means, a lot of this figured out. We're going to have to do this jointly, and I look forward to the results of today's discussion. Thank you all very much. I don't know how to thank you. Can't do anything without staff. Well, that was really great and inspiring and going to be a tough act to follow, but we have some really great speakers to come. I want to say that we have some people standing; there are some seats in the second row. We kind of over-reserved to be a little bit cautious, but people can take those seats. I have to remind everybody, if you haven't noticed by the plethora of cameras in the room, that this symposium is a public event. It's open to all.
It's being recorded and live-streamed. The record will be made publicly available. If there are discussions pertaining to any pending rulemaking, they may be added to the rulemaking record. The wifi password is posted; it's on the doors and a couple of the pillars. I'm sorry the print is a little bit small. Restrooms are across the hall. So those are all the housekeeping matters. Our first panel is on how disinformation and new technologies affect the ways people think, and what we are learning from the global experience. It will be moderated by Eileen Donahoe. At this time, I'll ask the panelists for the first panel to take their seats. Take it away, Eileen, and thank you all for coming. [ applause ] So, let me start by giving a great big thank you to Ellen Weintraub and Suzanne Nossel for joining forces in this event. Please, have a seat. It has been really great to team up with you both, and I really need to give a big thank you to the staff of both the FEC and PEN America, especially Tom Melia. You did the lion's share of the work here. So I want to say a couple of words about the goal of this event from my vantage point, and then we'll talk about this opening panel. The goal of this event was basically to help generate greater public interest and greater political will to combat disinformation in the lead-up to the 2020 election. We see the disinformation threat as a bipartisan issue, as Senator Warner said, in that the disinformation threat is simultaneously a national security issue, a cybersecurity issue, and a threat to our democracy, as he just laid out. The animating energy for us today was our shared sense that there has been an inadequate level of public outrage and official response to the foreign disinformation threat.
At the same time, we recognize that a big part of why the disinformation threat is so confounding is the domestic factor and the bizarre ways that foreign disinformation mixes with authentic civil discourse, domestic media, political commentary, and the speech of our own elected officials. Free expression and access to information are supposed to be the lifeblood of democracy. Political speech is supposed to be the most highly protected form of speech in our democracy. But this speech is now being turned against free society, undermining the quality of discourse and confidence in our elections. The peculiarly confounding part of this threat is that most disinformation around elections is neither false nor illegal. It's often a combination of factual information with compelling political framing that gives the facts emotional punch, but this framing does not necessarily turn those facts into falsities; they're not technically false. Even more so, sharing false information is generally not illegal in a democracy. And given First Amendment doctrine, content-based restrictions on the basis of falsity simply will not pass the Congress or U.S. Supreme Court scrutiny. There may be some limited carve-outs for false information related to election procedures, et cetera, but we will hold that nuance. The basic point here is protecting the integrity of information around elections, which is different from infrastructure security, data security, or the security of campaigns. Those are really important issues, but our focal point is disinformation. Disinformation is a much more nuanced and complex challenge. Many different stakeholders impact the quality of civic discourse, and many stakeholders will be needed to combat this threat.
As to the private sector, digital information platforms and social media companies do have a unique and substantial role to play, given how they affect the speed, scale, and amplification of disinformation, as well as the very complex cross-platform dynamics that they know better than anyone else. The big question is, what do we want the private sector digital platforms to do, and what is their responsibility in protecting free expression? As Senator Warner just said, robust moderation is a good idea. But we need to tread really carefully in asking the private sector to do what we do not want governments to do, which is to take down content based on a political assessment of truthfulness. That is what authoritarian governments do, and we do not really want to go there. From my vantage point, if we push private sector tech companies to undermine our own core free expression principles in the name of protecting democratic processes, we actually end up serving the ends of malign foreign actors whose goal is to erode confidence in the feasibility of adhering to our own democratic values, and ultimately in democratic governance itself. That said, we can't sit idly by and let the equally important right to democratic participation be undermined. So this is the tension we're here to help resolve. There is no single obvious lever for combating disinformation. The bottom line is that this is an all-hands-on-deck moment, and we need a comprehensive, society-wide approach. Our hope for today is to flesh out some non-regulatory solutions, assess some of the tools that are already being utilized by global stakeholders, and facilitate greater cross-sector coordination. But probably most importantly, we want to bring the public into this conversation and help build civic resilience to disinformation, which goes to the comment made at the end of the last segment about the importance of media literacy. As to this opening session, our goal here is to place the U.S.
election integrity challenge in a global frame, to see how the U.S. election threat is one very important data point in a much larger global trend. We have an embarrassment of riches in terms of our speakers. They'll each do short presentations and look at how digital technology is being exploited globally. We'll hear about some of the brain science behind how humans process disinformation. And we'll talk about a range of counter-disinformation tools that have been tried around the world and assess what may be transferable to the U.S. I'm going to briefly introduce our speakers, starting with the Honorable Michael Chertoff, former secretary of Homeland Security, who now co-chairs, with Anders Rasmussen, the former prime minister of Denmark, the Transatlantic Commission on Election Integrity. We have Lisa Fazio, assistant professor at Vanderbilt University, who conducts research on how people process fraudulent information and perceive truthfulness. Camille Francois is chief innovation officer at Graphika; she works to detect and mitigate disinformation and media manipulation. Mr. Wolf is Google's public policy lead on integrity policy. Susan Ness, a former FCC commissioner, is now a distinguished fellow at the Public Policy Center. She chairs the Transatlantic High-Level Working Group on Content Moderation Online and Freedom of Expression, in which Michael, Camille, and I are all participating. And last but not least, Nate Miller is the legal director, and he's going to speak about his organization's counter-disinformation campaign ahead of the 2019 European parliamentary elections and what lessons we can learn. After they each speak for five or so minutes, we'll turn to all of you for your questions, comments, and experiences. And we will start with Michael Chertoff. Thank you, Eileen, and thank you, madam chair and everybody else, for sponsoring this event. So, just to dive right into it.
If you look at information operations, or what they used to call active measures during the cold war, which are efforts to use information to undermine the unity of effort of your adversary, you quickly recognize that this is really a domain of geopolitical conflict. In fact, there is a reference to this concept in the so-called Gerasimov doctrine, based on a speech given some years back by the chief of the Russian general staff, arguing that a domain of warfare involves manipulating the minds of your adversaries so that you undercut their ability to resist, and essentially disrupt their unity of effort. I should say that this is not just about elections. The effort to use or weaponize information for purposes of undermining your adversary's unity of effort is more broadly applicable to any element of democracy and freedom. The idea is to paralyze your opponents, to make them shrug their shoulders and give up. This is not new. If you go back 100 years to the early stages of the Soviet Union, when they formed the Comintern, from the very beginning the ideology of communism saw the ability to propagate confusion and to undermine the morale of your adversary as an element in communist efforts to dominate the world. So during the cold war we saw propaganda and other efforts to manipulate target populations in order to achieve the results that the Soviet Union wanted. We see it even in the post-Soviet era. If you look at what happened in Europe in the early part of this century, in the first few years, you will see Russian efforts to use information manipulation, agents of influence, and money to drive behavior in a way that would favor parties or politicians that were viewed as pro-Russian, and the opposite for those viewed as anti-Russian. The financial support for a European right-wing party that was experiencing financial difficulty in the first decade of this century is one example of this kind of effort.
Frankly, the use of investment by Russian oligarchs in Europe as a way of propagating influence favorable to Russia was another dimension of this kind of information operation or active measure. So what is different now? Why are we so focused on this? I would say there are a couple of things that have changed, that haven't undercut the basic information or disinformation operations, but that have made them much more dangerous and difficult to control. First of all, the volume of information that is out there, and the number of different sources, has really amped up over what we saw even ten years ago, certainly 50 years ago. A lot of that is social media, and the ability to use all these different avenues to influence what people read and what they hear, and to drown out voices that may be inconsistent or contrary. Some of us are old enough to remember the days when there were three television networks. You had Walter Cronkite, etc. They basically tried to balance. They were not perfect, but that was your set of choices. That's very different now; there is no arbiter, nobody out there to be a kind of validator. That is one element. Social media has also amplified the ability to use data analytics to target audiences. You no longer have to speak consistently to everybody; you can make sure your message is targeted specifically to people who are interested in it. The fragmentation, as I said earlier, of the mainstream media, and the effort now to drive revenue by getting people to turn on to your particular media outlet, means there is a tendency for the mainstream media to amplify messages on social media.
In many respects, what you see on Twitter or Facebook is really just the invitation to get cable television or talk radio to focus on a particular plot line. [technical problem] The release of that data allows people who want access to information which they can publicize or distort in order to drive home a particular message. I should also say that it is not just the Russians at fault, although if you read the Mueller report you see how blatant the efforts were to get involved in the 2016 election. I have seen operations coming out of China and Iran, and I know Camille will talk about this in greater detail. We also have to be honest: it's us. We are doing a lot to promote the disinformation ourselves. Sometimes we are doing it at the encouragement of foreign actors, and sometimes we are doing it on our own because of various extreme views, which foreign actors then amplify and use tools to propagate. It's not just the adversary; it involves our own citizens, some of whom take extreme positions and then exploit these techniques, these kinds of technologies, in order to propagate them. I will conclude by saying there are two things on the horizon, or even closer than that, that we need to think about as this moves to the next level. One is the use of artificial intelligence. Again, the amount of data that is out there about what people are interested in, which is critical for microtargeting, is so vast that no team of human beings sitting in St. Petersburg or Beijing could analyze it in real time if they didn't have artificial help. That is what artificial intelligence is about. And those of you who follow cybersecurity generally know, for example, that the Chinese in particular have been accumulating vast treasure troves of data about citizens in the United States. Some of it is stolen; some of it is legitimate.
But all of it is correlated and used to allow somebody to target particular people who may be susceptible to particular messages. This challenge is only going to become more acute. The other thing I would say is this. We are talking about disinformation used to affect the election. I don't know if I am as concerned about efforts to tip people from one candidate to the other as I am about efforts to discourage voters. But here is an even more challenging question: what happens after the election if there is a disinformation campaign? Let's say it's a close call. Let's say there are disputes. Take your mind back to 2000, to the Bush-Gore dispute. Now imagine that occurred again, or something similar, and then you had a concerted effort to drive disputes and disbelief about the outcome of the election. That can affect not only the public's confidence in the outcome, but it can operationally affect the ability of the U.S. government to function over a period of months, which of course would be a wonderful gift to our adversaries. We need to start thinking now about the ways in which we can validate and adjudicate the accuracy of election results so we don't have 2000 on steroids. Thank you. Thank you. I will lay down a couple of markers on things that you said that are really important: the geopolitical dimension, and that disinformation is not new, but there are new dimensions related to social media. You highlighted the important dimension of doxing and professional media reporting on hacked documents. And disinformation isn't always about changing people's votes; it's about suppressing the vote and/or eroding confidence in the election outcome, which could end up being the biggest problem. So now we will turn to Lisa Fazio to learn a little bit about how information is processed in the human brain. I think my slides should be popping up any second. All right.
I am a cognitive psychologist. I study how we process true and false information in our brains: what makes things memorable, and how we decide what is true and what is false. So what I want to talk about today is why misinformation is a problem. We are all smart people; why can't we realize something is false and not have it affect our beliefs? Why does it still change our minds in some ways? To start, I'm going to put my little teacher-professor hat on and ask you questions, so audience participation: you have to yell out the answer. In the biblical story, what was Jonah swallowed by? A big fish, a whale. How many animals of each kind did Moses take on the ark? Two. Thank you so much. So most of you yelled out "two," even though all of you know that it was Noah, not Moses, who took the animals on the ark. This is something that we call knowledge neglect: we often have knowledge in our heads, but we fail to use it in a given situation. We fail to notice errors in what we hear, and they can affect our later thoughts and beliefs. So imagine a friend tells you some new, interesting fact. It used to be this might happen in person; now it's likely to happen online while you are browsing your social media feed. How do you decide if what they told you is true or false? There are at least two ways you can go about doing this. One, you can think it through: is this true, given what I already know about the world? Or you can use the quicker and faster way of just going with your gut: does this feel true?
We know that in a lot of situations, what humans do is take the easier and quicker path and go with the gut reaction. Thirty years of research have shown that we use our prior knowledge to determine truth, but we also rely on a lot of other cues. One of those cues is how easy a sentence is to understand or process, the fluency with which you can comprehend a sentence. And one of the big things that increases that fluency, or processing ease, is repetition: the more times something is repeated, the more likely we are to think it is true. Like I said, hundreds of research studies have shown that repeated statements are thought to be more true than things you have only heard once. In fact, this happens even when you have prior knowledge. We have done studies with statements such as "the Minotaur is the legendary one-eyed giant in Greek mythology." If people read this sentence twice, they think it is more likely to be true than if they've only read it once. So that repetition is increasing your belief in the false statement even when you have the prior knowledge that it was the Cyclops. People have also done research on this using typical false news headlines, pro- and anti-Trump and Clinton fake news: when they are repeated, people think they are more true. Most interesting to me, this happens regardless of political beliefs. Whether the statement matches or goes against your actual political beliefs, repetition still increases its perceived truth. So why does this happen?
One of the big reasons is that it is really effortful to consult prior knowledge. Instead, we just go with whatever is good enough or close enough and think it's true. Our brains are amazing machines; even the best AI can barely do what the human brain does in milliseconds, like look out into this room and recognize the scene and everyone in it. But our brains are also really lazy. They don't like to work; they take shortcuts whenever they can. One of those shortcuts is that when something is close enough, we just assume that it's true and we're done. That makes it really difficult for us to notice errors in what we read and hear. How can we stop it? What can you do? Prior knowledge isn't enough in and of itself. I just showed you the Minotaur example: even though you know it's the Cyclops, you still think the statement is more true when it is repeated. But there are things that do help: thinking deeply and critically about what we are reading, taking a second to pause and ask, how do I know this is true? Where is this coming from? Things like that can help. So giving people an accuracy focus, an accuracy norm, is useful, as is having people think about how we know whether something is true or false. In one of his fireside chats, Franklin Roosevelt once said that repetition does not transform a lie into a truth. While that is true on its face, he was both right and wrong: in terms of what repetition can do in our heads, in our minds, it turns out that repetition does have a strong impact on what we believe. Thank you. [ applause ] So hopefully we will come back to your point about how we build resilience, deeper critical thinking, and an accuracy norm; this again goes to the question we had earlier about media literacy and how we build a more effective program. We are now going to turn to Camille Francois.
Thanks. I want to talk briefly about foreign threats and the global for-hire market of disinformation, and I just want to plant a few seeds that we can return to in the conversation together. When we think about foreign threats to electoral integrity, I think a lot of us think of Russia, and I think there is a little bit of Russia fatigue, almost. I do want to talk about Russia just a little bit, because I think it gives us a marker for what we have learned. Senator Warner said this morning, very eloquently, that the U.S. and Silicon Valley were caught flat-footed. So my question is, what have we learned? What do we know now, and what do we still not know? I have spent an ungodly amount of time looking at the very details of the Russian interference campaign against the United States and against other nations, when I was at Google, working with the Intelligence Committee in 2016 and 2017, and again around the midterms in 2018. So there is something perhaps a bit surprising that I want to discuss with you today: there is still a lot that we don't know about what happened in 2016. For all the headlines, for all the data that Silicon Valley shared, for all the academic reports and discussions and hearings, we still have major blind spots in truly understanding how Russia targeted the U.S. and what we have to learn from that. I could go on a long time listing all those blind spots, but I want to give you two examples. A lot of people think about the Internet Research Agency, the troll farm that was based in Saint Petersburg, with people on staff creating messages targeting specific communities in the U.S. across social media platforms.
Now, the IRA was not the only Russian entity that was involved in producing and disseminating disinformation on social media targeting Americans. While Silicon Valley has shared a lot of details and data on what the IRA did, and we have those posts, we don't have a corresponding collection of what the GRU did. That matters a lot, because the GRU is actually the better-funded and more persistent actor; it is easier to dismantle the IRA as a network than it is to dismantle the GRU. It also matters because the GRU is responsible for some of the most complex techniques that we saw in this campaign, for instance the hack-and-leak technique, where you take someone's emails, package them, and turn them into a leak. Those are the techniques that really attack us as a society where our fragility is real; they tax exactly the weak points in our own democratic processes. It makes it very complicated for journalists: oh, those are leaks, should I cover them? Are they actually coming to me from a legitimate source? How do I handle this? So that is one example. Another one that immediately comes to mind is the amount of [technical problem]. I worked with activists who were on the receiving end of the Russian campaign in 2016. A lot of this targeting was done through direct messages. We have never seen any of those. We don't have a sense of how much of that activity happened. And that is so much more insidious, much more pervasive, and in some ways much more powerful than some of the public posts that we have analyzed. So we have to recognize that with some of these techniques, there are still things that we have not unpacked, that we have not publicly discussed. Direct messaging, of course, is also a key tool in targeting the media, and it's often used by foreign adversaries. So that is it for Russia, I promise. I know people don't want to talk about Russia all day every day. I could if I wanted to.
The other thing I wanted to say on the foreign adversary side, and Michael Chertoff touched on this: there are other adversaries in the game. Sometimes I hear that other people are now doing what Russia did. That is historically untrue. I think it's painful for us to recognize this, but others have been doing it for even longer, which I think is a testament to how many of us have been asleep at the wheel. Iran started targeting the U.S. public on social media as early as 2013. So this is really us waking up to the fact that a series of foreign adversaries have been using these techniques to target our conversations for a very long time. We are seeing a lot of detailed work on recent Iranian campaigns, which is good. We are seeing the first reports come out on how China is using social media to target American constituencies, and data available on Saudi Arabia and how it has also built this apparatus. All of these have their own strategic goals, preferred techniques, and target communities. [technical problem] What are the telltale signs of a campaign that may come from here or there? These adversaries are also engaged with us; they are adapting. Something that was really interesting around the midterms in 2018 was that we saw Russia, and specifically the IRA, come back again, which gave us a way to really assess how much they had progressed against our own systems and defenses. They were better at hiding their traces. Some of their tactics were new, and they were frankly insidious. This is something that we are going to have to be engaged with in the long run, a little bit of a cat-and-mouse game. That being said, if we want to play that game we have to be very straightforward about what we know, what we have learned, and how things are evolving. Very quickly, on the presence of a large for-hire market: this part is a little bit more fun. It's a bit depressing, but looking at all the disinformation that you can buy online, and that people are selling, it's quite disturbing.
The for-hire market of disinformation is growing every day. It has small players, people who hack into other pages and resell them, and very large, mercenary-like, well-established, above-board players as well. [technical problem] If you look at a country like the Philippines, [technical problem] what those farms are reporting. So really what we are facing is a for-hire market around disinformation tactics. What do we do about that? There are two things that are important to tackling the problem. The first one is detection: working hard on better detection techniques. The idea is to find the forensic signals that betray that something is wrong and that someone is attempting to manipulate the public discourse. Sometimes it can be through bots: if you don't have a large budget and you want to run a disinformation campaign [technical problem]; if you have a bigger budget, you can run a little bit more of a subtle campaign. Now, this is not going to be enough, because frankly this for-hire market is close to what legitimate firms are doing, new technologies are being developed, and there is a lot of gray area. I'll give you an example from the midterms in 2018, when we saw a candidate's campaign that suddenly was pushed by a lot of accounts saying the exact same thing at the exact same time. That created massive chaos in Silicon Valley. Everyone came to us saying, this candidate maybe hired a troll farm, or bots; this is disinformation and we need to take it down. In reality, what the candidate had done was buy an app. Their supporters had downloaded the app, granted the app access to their own social media accounts, and agreed to participate in this one push campaign. They downloaded the app willingly and granted access; this was all part of something they had agreed to. That gave Silicon Valley pause: what do we do with that? Is that okay?
Is that coordinated inauthentic behavior, or is that exactly how people are going to be campaigning today? I think because of our lack of serious dialogue on what we are or are not willing to accept on social media, we will find an increasing number of gray-area situations like that as we head toward 2020. I would really encourage a serious conversation with candidates, with parties, with platforms, on what is unacceptable practice on social media. What borders on disinformation? What is simply a modern and creative use of digital tools? We are bound to have very complex and difficult conversations that will strain trust in our institutions. So the two big things there that I would come back to, keep these in mind: how worried are you about coordination among all of those adversaries, Russia, Iran, China, Saudi Arabia; will they be coordinating? And then, for all of us, we really have to be thinking about the norms in political campaigns and what counts as unacceptable political strategy. What do we want campaigns to be doing, and how they use these tools, versus what counts as inauthentic, coordinated, manipulated behavior? [technical problem] Good morning, everyone. The comments so far have been very insightful. Obviously, for Google and YouTube, the challenges raised by malicious actors who try to deceive our users or to harm them are contrary both to our mission, which is to connect people with useful information, and to our business interest. We simply cannot let third parties abuse our platforms in that way. These are the kinds of challenges we have been dealing with since the very early days of our platforms. Not necessarily disinformation, but ways in which people tried to elevate their content inauthentically: scamming, making a profit, link schemes, cross-linking websites to make them seem more relevant and useful than they were, and so on. So these types of challenges have been on our minds for a long time.
Obviously, the challenges raised by disinformation and information operations are top of mind for us, and they have been over the past few years; we take them very seriously. I would add to the excellent points of some of my co-panelists that we take them seriously not only during elections. We try to extend this work beyond elections, precisely because the kinds of actors behind these information operations do not wait for elections to begin. They instead use the quiet time to plant their efforts and to plan ahead, so we try to stay ahead of that. I will tell you very briefly about the high-level approaches we employ to thwart these efforts at Google, with the understanding that this is an arms race: there will never be a point at which my job is done. Every time we do something new, we learn something new as well, and we have to keep building on that. Then I will go into some of the emerging threats we see around the world and how we handle those. I am more than happy to dive back into these points during the Q&A. In terms of how we approach these issues at the company level, there are three major approaches we try to employ — alongside work with the industry, media literacy educators, and newsrooms, but specifically within our products. The first way we try to address these challenges is by designing our systems so that they make quality count: they try to form an understanding of which sources are authoritative on a given issue. That is a challenging endeavor, and it is not a solution you build once and are done with. Even though we don't have a perfect concept, to give you a sense of the scope: for Search, for instance, we made more than 3,000 changes last year. So it is not a done deal, but it is something we have been working on for quite some time.
Of course, when it comes to content that is most likely to be harmful, we try to severely reduce its spread — that is making quality count. Then there is trying to understand what kinds of tactics and behaviors malicious actors are going to use to evade and game our systems. We have had, for some time now, across all of our platforms, rules of the road that spell out what is permitted and what is not, and automated systems that try to catch creators who infringe on those policies. What matters is that they ban the kinds of behaviors you would use if you were a malicious actor with a piece of content you really want to propagate fast — trying to game the system produces exactly the kind of inauthentic activity some of the panelists were referring to. So thwarting these inauthentic forms of activity is something we have invested in significantly. We also have policies against specific tactics that we still put a lot of research into and have teams looking at, such as impersonation or misrepresentation: for instance, it is not okay to misrepresent ownership of, or impersonate, another channel on YouTube. So that is the second layer, directed at malicious actors. The third layer, quickly, is trying to provide context to users wherever we can, by giving them, at the right time in their navigation, information that speaks to what is happening in their own minds about what they are looking for. These are knowledge panels or information panels: you see information panels on Google that will show you, for instance, where a broadcaster's funding comes from, and you see them under the top-news or breaking-news shelves on YouTube. There are also ways to provide context and a more holistic view, such as the Full Coverage function, where on an issue you sometimes have a timeline, presented in a way that is not personalized and explores all the facets we have available to show. So those
are the three ways we try to counteract the efforts of malicious actors. It is always a work in progress, and we double down on these efforts when we know there are challenging moments ahead, like elections and other similar events where we are likely to see more attacks on our systems.

Moving quickly to the emerging-challenges part of this conversation: having seen variations of these challenges in numerous countries around the world over the past few years, it has been interesting to observe how much the local specificity of each society in which we operate matters in this space. A simple example is that in many places around the world, group messaging apps are a significant part of information discovery, in ways that are more pronounced than here in the United States. That changes the vectors of attack for bad actors. They still try to get to our platforms, but with different goals and different tools, and we have to be mindful of that as we expand into each new country. We try to stay ahead of their goals; it does not mean we start from scratch every time, but we do have to be mindful that it cannot be an exact cut-and-paste.

The last thing I will say is that beyond those local specificities, there are two other flags for this group. One is the rise in concern around synthetic media, whether AI-generated or manually edited. That is not an entirely new challenge for us: YouTube has seen, from its very early days, individuals manipulating the content of videos — editing them, splicing them, and so on — to deceive users. It is certainly something we are mindful of, because it is quite cost-effective for the malicious actors who use these techniques, and they can get some traction. We do have policies around those to make sure we can act on them, and systems trying to catch them, but it is top of mind for us as a rising threat.
The other one is that as large platforms create more friction for the operations of these various actors, it may be easier for them to set up their own spaces, in which they reach a smaller audience but can reach scale faster and with less friction, and then try to come back to the larger platforms. That is not something we have observed widely at this point, but it is an example of the many ways we try to anticipate what they will do next and how we stay ahead of it: monitoring threat actors, looking at what they do, and understanding how it may impact our systems.

Two points I want to underscore. One is how we struggle with what now counts as an election contest, because what happens well before the election, and what happens after it, all matters to the integrity of the election. The other question I would ask, on this point of making quality count and elevating quality: what kind of resistance have you had to that approach from domestic political actors who may dispute your assessment of quality? I guess we will now turn to Suzanne.
Thank you, Elaine. If you begin with the premise that freedom of expression is fundamental to democracy, then government-mandated removal of content deemed false and deceptive is itself a challenge for democracy. Western democracies are stymied: how do you tackle content that is odious, false, manipulative, and disseminated with the goal of dividing society and destroying our faith in institutions, but that is not illegal? Most of the lessons that we have learned — that the group I formed has learned — in looking at specific laws, regulations, proposals, and initiatives, public and private, really point out what not to do. It is hard to pin down what will actually work while protecting freedom of expression, but we are getting better at it.

Governments often lump together hate speech, disinformation, and viral deception in an effort to regulate platforms. Some European member states, including the UK — at least for the next couple of weeks — and France, have proposed new regulatory regimes using either financial regulation or broadcast regulation as models. The UK white paper released this spring proposed a new regulator that would not police truth on the internet but would instead target harm and manipulation online, which constitute, as they put it, threats to our way of life. It proposes to focus on platform behavior rather than content itself, recognizing that it is hard to remove all harmful content and catch it all. Platforms would owe an ill-defined duty of care to the public, and they would be evaluated on whether they have taken proportionate and proactive measures to help users understand the nature and reliability of the information they are receiving and to minimize the spread of misleading information. They would be subject to codes of practice requiring transparency and improved clarity around advertising — we are seeing that as a major theme across the board — cooperation with fact-checkers, boosting authoritative news, and making content flagged by reputable fact-checkers less visible to users.
All this sounds good, but the focus on harm to society will clearly have a chilling effect on free speech, as platforms become more likely to remove more content to avoid heavy fines. The amorphous duty of care might lead to proactive monitoring of legal content, which again is very troubling. The UK Parliament is expected to take up legislation this fall that would set up a regulator and address illegal content, leaving the question of legal-but-harmful content for a later date. In France, they enacted a law in 2018 to address manipulation of information online around elections. It imposes strict rules on the media for the three months leading up to an election and gives the authorities the power to remove content spread virally and to block sites that publish it. It requires platforms to disclose who purchased political ads and the amounts of money involved. Candidates can sue for removal of contested news stories, and importantly, judges have 48 hours to rule — but it does go through a traditional judicial process.
Last May, the French government also issued a paper proposing an innovative regulatory regime that would focus on platform behavior — again, not content. It would examine transparency, terms-of-service enforcement, and redress for those who were harmed, and it avoids regulating the content itself. The European Commission has put forward a package of activities to address disinformation in the run-up to this past spring's European Parliament election. It expanded digital literacy, supported quality journalism, elevated fact-checking, provided a network for fact-checkers, and included a code of practice. The Commission itself has said it does not want to be a ministry of truth — a statement with real resonance for those who, like me, lived under a communist regime in part of Europe. The code of practice is a self-regulatory measure; it applies only to platforms, and only a handful of platforms and advertising trade associations actually signed on. It encourages transparency, media literacy, research, access to data, and ad transparency; it calls for the closure of fake accounts, the labeling of bots, and prioritizing relevant, reliable information. It created greater cooperation between platforms and the government, and it [technical difficulty] which is the news version of a safe harbor for platforms. So, in conclusion: looking at the efforts that have been taken so far — apart from some of the Nordic and Baltic countries, which have basically avoided content regulation in favor of public education, fact-checking, and encouraging quality journalism — there is no silver bullet to address disinformation in a manner that is true to freedom of expression. Transparency with respect to political advertising needs to be pursued, and it is best to address behaviors and actors as opposed to the content itself. Thank you.
That's great. I want to get the audience thinking — after our next speaker we're going to ask you to come in — to really grapple with these questions Susan raised about how hard it is to craft a regulatory response that actually works in preventing disinformation but doesn't undermine free expression, and also what you think about this distinction between not regulating on a content basis but going after manipulative behavior. Does that distinction make sense to you? So, reflecting on that, our last speaker is Nathan Miller.

I couldn't ask for a better introduction than that, because I am going to talk about content. I want to talk today about a policy solution that I think can help us bridge the gap between the democratic value of freedom of expression and ways that we can combat disinformation, and I will illustrate it with some work that Avaaz did in Europe around the European Parliament elections. Briefly, for those who don't know us: Avaaz is exclusively member-funded, and our members take a very active role in helping us decide what issues we work on and what campaigns we launch. Our members are deeply concerned about the threat that disinformation poses to our democracy — to the extent that they funded a large program we launched in Europe ahead of the European elections to combat the trolls. Our team of "elves" — I won't belabor the point — discovered disinformation networks in six European countries. Facebook took action, not on everything that we reported, but of the posts and groups and pages that Facebook did take action on, let me say those reached approximately 750 million views in just the three months leading up to the European elections. I want to pull a couple of important things out of that. The first is that any reports you may have read about the demise of disinformation have been greatly exaggerated. Second, this demonstrates that civil society can do detection and can do reporting — and there are ways to optimize that.
The top one, I would say, is the lines of reporting. It was difficult for us to get Facebook to take action in a timeframe relevant to the upcoming European elections. We were able to do it because we have a certain measure of access — not a formal partnership, but relationships — and we were able to bring a certain amount of pressure, or threat of pressure, to bear. That is too high a bar: it needs to be better and easier for civil society to report disinformation it has found and get it acted on. The most important thing I want to pull out of this: we don't know how many humans — we know how many views — but tens, potentially hundreds, of millions of people saw this toxic content, this hate speech and disinformation. So when we talk about the erosion of faith in our democratic institutions, we are talking about individual members of our society, and we are not currently doing enough to counteract the effect of the disinformation that has already gotten out there.

The solution I want to offer today is Correct the Record. It starts from the simple premise that, of the tens or hundreds of millions of people who saw that content in Europe, really the only entities capable of reaching them, and of letting them know that they were duped, are the platforms themselves. So Correct the Record is the very simple idea that once a piece of content has been verified as disinformation, the platform could and should let each person who has seen, liked, commented on, interacted with, or shared that piece of content know that it was disinformation — that it was false or misleading. This approach has a few advantages. First and foremost, it is not censorship. There is no ministry of truth; no one is asking the private sector or the government to determine what is false. Correct the Record adds the truth but leaves the lie alone. People are free to post and share what they like; it is just that the post and the share are going to be accompanied by a notification that it has been debunked
by professional fact-checkers. Another advantage is that it works. There is a significant amount of research showing that exposure to corrections after exposure to disinformation causes people, in significant measure, to stop believing the lies and start believing the truth — even when the disinformation is congruent with their ideological beliefs, corrections cause them to shift a little. And Avaaz has been doing our own internal testing of Correct the Record with our members, and we have seen exceptional results between control groups and groups exposed to corrections. We have also done qualitative research, and one of the things we found — picking up on what my co-panelist Lisa said — is that just seeing a correction engages people's critical faculties: they start to interrogate not only the original disinformation but the correction itself, which is ultimately good. It has some challenges, certainly. One, as some other folks up here have noted, a significant amount of disinformation and election manipulation happens on private encrypted services, and it is very difficult — I won't say impossible — to get solutions like this onto messaging services. The other is that we would need to massively scale up the fact-checking sector in order to get enough people looking at enough content. What we have now are groups that will certainly tackle the highest-value, most difficult pieces of disinformation, but that are not particularly interested in my crazy uncle's counterfactual statements. In conclusion, I want to say there is no conclusion. This can be [technical difficulty] a culture that does not want to be duped — one where that becomes the essence of a media literacy campaign.

So we would like now to turn to all of you and see what kind of questions you might have for any of these panelists on any of these themes. Please state your name and organization if you are able, and the mic will be passed. I see — right there, Jim.
My name is Jim; I was a longtime journalist and I am now at Stanford. I appreciate what you're doing on Correct the Record. It would be interesting to hear from Professor Lisa Fazio on whether people actually change their minds when you do fact-checking — I know a lot of research shows the opposite: that once people are exposed to an untruth that accords with their beliefs, it is very hard to change their minds; they say, well, there must be a grain of truth in there somewhere. So even if the fact-check is out there in the information universe, maybe that isn't enough — you would have to fact-check the unlimited amount of information you alluded to. I am also interested in the mainstream media — the manipulation of the mainstream media — and making sure they are not aiding and abetting that effort. I would like news organizations to come up with an urgent set of protocols that they voluntarily abide by [technical difficulty] — namely, that when a news organization decides that reported disinformation is going viral, they don't link to it in an active way; that might mean using a screenshot so you don't boost it. There are a lot of ways the mainstream media can decide, in their news judgment, not to expose people to it. This brings me to my question. For me, the share — or the like — button is a national security threat. A lot of you have been talking about the need to slow down the spread of disinformation. When I came in, I was offered the opportunity to rate my Uber driver. Why is there not a way to score the quality of the information a person is spreading? Is there a way to put in a pause before you spread or share something, to say: this thing was checked by, say, Avaaz, and it's not necessarily credible — something to slow it all down and allow the critical thinking Professor Lisa Fazio alluded to? I know that doesn't fit with the drive to spread and share, but is that something we can add to the discussion? Thank you.
So, three important things: the efficacy of Correct the Record; the role of the mainstream media, as opposed to social media, in propagating disinformation; and designing for friction and what people think of that. Anybody want to take any piece of that?

In terms of correcting misinformation, by far the best thing to do is not expose people to the false information to start with. Once they have seen the false information, you are already in a hole you are trying to dig yourself out of. Fact-checks help — they definitely do something; they are effective, and they do change beliefs. For a while researchers were concerned about something called the backfire effect: the idea that if I really believe something and you correct it, I will believe it even more than I did before. That does not seem to happen all that often — in very limited, select cases, maybe, in some situations — so on the whole it is not a big concern. On the whole, fact-checks help, but they don't help as much as never having seen the false information in the first place. One of the issues I think we have to work through with something like Correct the Record is how much interaction with the false information you need before getting the correction. What you don't want to happen is showing the correction to people who were never exposed to the false information in the first place. So you want to make sure they engaged enough that they need the correction you are giving them. But if they do need it, I think it is a useful thing to have.

Can I just jump in on the efficacy? Yes. I will be very brief: total agreement that the studies show this. We are actually launching our own academic study, in collaboration, and I will be happy to share the results. I'll just stop there.
All right. I think it is an important question. We do look at all the options we have at our disposal to address the spread of these practices, and it is interesting to think through the trade-offs of each. For example, with pausing before sharing, you might have cases where you want the sharing to happen fast, because someone is in distress or wants to call for help. So when we design products, things like adding context and making sure we elevate authoritative sources are ways we try to slow misinformation down. We think about these options often, but they come with trade-offs that are not easy to navigate without causing harm in other ways.

Thanks for another great set of ideas. I run a platform that helps academics with social data, and have done so for about ten years. Without oversimplifying, a lot of what we have talked about today comes down to: how do you know it when you see it? Some things are easy to do with machines — that is the low-hanging fruit, and platforms like Google have gotten really good at the easy ones. Harder things are not as amenable to machine learning, and they require human judgment. The problem is the diversity of domains and topics: a human may be an expert in some domains but not others — someone who has never smoked a cigarette may not be able to tell the difference between marijuana and tobacco. So my question, moving from the specifics to the general: what made Google special was that it looked at the web and said, not all pages are created equal — some are strong and some are weak — so let's build an algorithm that favors the strong over the weak. What if we looked at humans and said the same thing: some are strong and some are weak in specific domains, languages, topics, and tasks. Don't you have to have a way to rank humans, the way Google ranks web pages, to meet these demands? That goes back to the question of the quality and authenticity of sources, but now also the difference between humans and machines, and how you rank humans. Anybody?
At a very high level, the only thing I will say about this is that we do not, as far as I can tell, rank humans, nor do we intend to engage in that pursuit. However, we do have trust and safety teams that act as complements to our systems and review what the machines miss. We are not suggesting by any means that machines alone are the answer, and we invest in significant trust and safety efforts across all of these aspects of the company.

Okay — super fast, one, two, three. Very quick.

I just note that Google does rank humans — anyone who uses Google Scholar can see it. Setting aside the question of Google Plus and everything around it, authorship there was very much an effort to associate people with different amounts of authority for different terms. Let's be honest about this: we are ranking humans; we are ranking authors and how citable people are. These are really important ideas — whether someone is verified on social media, whether they are trustworthy on social media. A celebrity might be exactly who they say they are and still be sharing something false, and the companies really screw this up, so I'd like to see a little more candor about what you're doing on this count. My name is Alex Howard; I care about these issues a lot. We haven't talked about cable news and talk radio — we are talking all about technology, but we know very well that these are huge platforms spreading disinformation. Why isn't that part of the discussion? And a specific question for Michael Chertoff — thank you for your work on these issues: where is the current leadership of DHS on these issues? Why aren't they here? What did they do wrong in 2016, when they didn't alert the nation that something was happening, and what should they be doing right now? Thank you.
Yes, hello — a question for Google. You mentioned the knowledge boxes. Well, these kinds of fact boxes are dependent on Wikipedia. Wikipedia editors are anonymous; only a small number of them decide the controversial entries; entries can get locked after someone dies or when there is a major event. You see yourselves as dependent on Wikipedia, which is the first search result on many topics — do you feel that Wikipedia needs to be fixed, perhaps by you buying it and fixing it?

I am just going to underscore the point about cable news and mainstream media — it is just off the hook here, and that is a big deal. But there were two direct questions, one to Michael and then one back to Google.

So, cable news and talk radio are part of this, and a lot of what is going on on social media platforms is the desire to engage with personalities who are driving discussion on certain cable news networks or on talk radio, in order to propagate things. We have to look at this as an ecosystem. I can't tell you why DHS is not here, or whether they were invited or not, but I will say that DHS is working with the states to help them raise the level of cybersecurity of their voting-machine infrastructure, and they have begun the process of putting together a strategy for dealing with foreign nations and disinformation. It gets delicate with U.S. government involvement, because there are restrictions on what the government can do when dealing with the issue of rating information and the quality of information.
As to what happened in 2016 — I wasn't in office then — part of the problem was uncertainty about whether the government should publicize Russian disinformation, or whether that would in fact be viewed as political manipulation, and I think that is a challenge. I will tell you what the Canadians have decided to do: they have set up a panel of senior officials, and this panel, in the run-up to the election, will have the power to judge whether the degree of risk of foreign interference is sufficiently high that it warrants warning the public that this is going on. The idea is to create a more or less neutral arbiter of when to raise the red flag, so you are not worried that overreacting or underreacting will be seen as a political act.

Really quickly on Wikipedia: it is true that some of the panels use Wikipedia content — not all of them — and there are pros and cons to the kinds of things that get highlighted. We are mindful, in the way we build these panels and the policies around them, to make sure the sources have resources. But beyond that, we don't have other plans at this time dedicated to these sources.

Unfortunately, we are out of time. We have an embarrassment of riches in terms of researchers, and a complex topic — this is just the beginning, and it must continue. Thank you all. [applause]

Next, the chairman of the Federal Energy Regulatory Commission discusses U.S. energy markets, policy, and climate change at an event hosted by the group Resources for the Future. This is an hour. [applause]
