
We have an important guest with a limited timeframe, so we are going to get started. I am Ellen Weintraub, chair of the Federal Election Commission, and I am delighted to welcome you to Digital Disinformation and the Threat to Democracy: Information Integrity in the 2020 Election, a symposium I am cosponsoring with PEN America and Stanford's Global Digital Policy Incubator. Disinformation has become a newly potent threat in the digital era. It has weaponized our most cherished freedoms, sown discord, and undermined democracies worldwide. In the West, we saw Russia deploy disinformation as one of its tools when, in the words of the Mueller report, the Russian government interfered in the 2016 presidential election in "sweeping and systematic fashion." Alarms are being sounded throughout America's intelligence community, in the halls of Congress, by the Departments of Homeland Security and Justice, and by many of the experts who have generously taken time to speak to you today. They warn that these attacks are ongoing, accelerating, and unlikely to be waged by Russia alone going forward. In May, when the House Oversight Committee's Subcommittee on National Security held a hearing on securing America's election infrastructure and political discourse, I was pleased to be called upon to testify. I spoke then about how the infrastructure of our elections is not just the physical electoral apparatus run by state and local governments but, more fundamentally, the faith that American citizens have in our elections. I spoke about how that faith has been under malicious attack from our foreign foes through disinformation campaigns. When the Russian government spent millions of dollars to interfere with our election, that directly implicated the ban on foreign spending in our elections that is part of the Federal Election Campaign Act administered by the FEC.
They violated the core disclosure mission of this agency, but the problem is so much larger than that and cannot be meaningfully addressed if each of us views it through our own narrow lens. Disinformation is a fundamental assault on our democracy, on the united character of our United States. Disinformation exacerbates our divisions and distorts the truth that binds our common experience. The Russian chess grandmaster and human rights activist Garry Kasparov, a close student of Russia's disinformation and its main tool, propaganda, puts it this way: "The point of modern propaganda isn't just to misinform; it is to exhaust your critical thinking, to annihilate truth." Well, today we will prove that our critical thinking is not nearly exhausted and that truth has not nearly been annihilated. I am so gratified that this event has drawn to the FEC one of the largest gatherings I've seen in my time as a commissioner. We have overflow space, standing room only. It is really terrific to see you all here. We have on hand current and former officeholders, scholars, journalists, and advocates. We are fortunate to be joined by luminaries such as Senator Mark Warner and former Homeland Security Secretary Michael Chertoff. A number of representatives from the tech community are in attendance, including folks from Google, Microsoft, Facebook, and Twitter. Some of them are on panels, but all of them will have opportunities to ask and answer questions and make comments. I thank you for coming and I welcome your participation. In this gathering today we seek to launch a conversation. We have brought to this room many people with many different perspectives. We invited many others; not everyone could make it, but this is a good beginning. We have a wealth of expertise, and I intend to do far more listening than talking.
Some of us here today, like me, swore an oath to protect and defend the Constitution from enemies foreign and domestic, so it is fitting that we are gathered on Constitution Day, 232 years after the Constitution was signed, to work together to defend the system of government created by the Constitution, which is indeed under attack. The task before us is large and challenging. We won't solve all of our problems today, but we definitely won't solve them by working in isolation. We have to work together, and we have to see the big picture. I am grateful to have as my partners in this effort, cosponsors of this event, PEN America and Stanford University's Global Digital Policy Incubator. PEN America has been a strong and consistent defender of First Amendment values and a leader in spotlighting the risks to free speech and open discourse posed by disinformation. I want to thank them for the vision, focus, and energy that PEN America and its CEO and Washington director have brought to this joint project. Stanford University's Global Digital Policy Incubator is another leader and innovator, bringing together governments, tech companies, and civil society to develop policies that advance human rights and democratic values. Thank you to the executive director, Eileen Donahoe, for all of your help and insight. I won't take the time to fully expand on their many accomplishments, but one really does bear noting: working together, they were instrumental in the passage of the first-ever U.N. resolution on internet freedom in 2012, which laid down the foundational principle that human rights must be protected online, just as they are offline. To get housekeeping out of the way, my administrative staff asked me to point out to you the two exits to this room, right over here and way in the back. I'm doing my flight attendant routine. In case of a fire or emergency, please exit in an orderly fashion and head to the staircase by the water fountains between the two doors.
I hope such warnings are never necessary, and I especially hope they are not needed today, because I want this symposium to bring light, not heat, to the discourse on disinformation. In that spirit, I am pleased to introduce one of the country's leading lights in the fight against disinformation, Senator Mark Warner. From his seat on the Senate Select Committee on Intelligence, he has been at the forefront of the ongoing counterintelligence investigation and debate. In a town torn by partisanship, he has worked hard with the chair of the committee to keep those efforts bipartisan. Senator Warner is recognized as one of Congress's preeminent voices in the ongoing debate surrounding social media and user privacy. He is the cofounder of the bipartisan Cybersecurity Caucus. One of his bills would write tough new data protection standards. Another addresses the use of dark patterns, deceptive tactics that trick consumers into handing over personal data. If it is digital, Senator Warner is all over it. I'm excited and delighted to have him here today. Please join me in giving a very warm welcome to Senator Mark Warner as he delivers our opening keynote. Thank you very much. Thank you for those kind comments. I will also agree that hopefully we will not be rushing for the doors, but if you happen to see a posting on your device, rush for the doors; no question who is behind that. Again, thanks to Chair Weintraub and Ambassador Donahoe for hosting me today and for putting together this very thoughtful and timely event. Actually, I've never done this with glasses, so I will try to do this without glasses. Michael, you've probably dealt with glasses. I want to thank again PEN America, which has a longstanding record of defending free expression and public discourse. Unfortunately, like many of our institutions today, that discourse is under assault. It is under assault from dark money, Russian trolls, and extremists who exploit social media.
Through each of these cases, and others, we see the openness, diversity, and accountability of the public sphere being undermined. The internet optimism of the 1990s and early 2000s obscured this trend for many of us. There was a bipartisan consensus that the internet was inherently democratizing and liberating. We based major portions of major policies on this underlying optimistic assumption, including our stance on China and on revolutions abroad. As a result, we now face serious policy challenges. Today, and I say this reluctantly, I think we need to rethink that optimism. We need to come up with a new set of foreign and domestic policies based on a less optimistic, or at least more realistic, notion of technology and the internet. To be sure, this is a somewhat surprising place for me to come from. For many years, as Ellen mentioned, I have kind of been known as the tech guy in Congress. Like many policymakers, I shared the consensus that these technologies and the companies that built them are largely positive forces. Now we've seen how the misuse of technology threatens our democratic systems, our economy, and, increasingly, our national security. Russia's attack on our democracy awakened a lot of people to this truth. We know the United States faces a serious threat in the cyber domain from both state and nonstate actors, not to mention the threat of misinformation and disinformation efforts by Russia and, increasingly, by those who have copied their playbook. As a result, we are finally beginning to have some overdue conversations on privacy, data transparency, and other critical issues related to social media. But we also must confront the way that domestic actors have exploited these technologies. More broadly, our position as a global leader on technology issues has been weakened by the retreat of the United States on the global stage, as well as by Congress's unwillingness or inability to formulate smart policy responses to the challenges we face.
And frankly, I worry that this administration's haphazard approach to trade may end up exporting and internationalizing some of our worst tech policies. While I am encouraged that governments around the world, including the EU, have begun to fill this vacuum, the need for U.S. leadership and pragmatic, tech-savvy policy has never been greater. As vice chairman of the Senate Intelligence Committee, I've spent the better part of the last two and a half years on the only bipartisan investigation into Russia's attack on our democracy. The truth is, the United States was caught flat-footed in 2016, and our social media companies failed to anticipate how their platforms could be manipulated and misused by Russian operatives. Frankly, we should have seen it coming. Many of the technologies relied upon by the Russians were not new, like recommendation algorithms. These and other techniques were longstanding tactics of online fraudsters. We even saw an early warning sign of this in something called Gamergate back in 2014. You will recall that Gamergate was a concocted and coordinated harassment campaign waged against women in the videogame industry. It foreshadowed how bad actors could use a variety of online platforms to stoke conspiracy theories in Facebook groups and other seemingly innocuous, you know what I mean, online communities. We also missed the warning signs in the international context. Over the last two decades, adversary nations like Russia have developed a radically different conception of information security, one that spans cyber warfare and information operations in a single arena. I feel we have entered a new era of nation-state conflict, one in which nations project strength less through traditional military hardware and more through cyber and information warfare. But our adversaries aren't necessarily using highly sophisticated tools. They don't need to. They are attacking us opportunistically, using phishing techniques and rattling unlocked doors.
In many ways, we brought this upon ourselves. We live in a society that has become more and more dependent on connected products and networks, yet the level of security and integrity we accept in commercial technology products is shockingly low. Think about the fact that we still do not have basic security standards for IoT-connected devices. And as a society we continue to have entirely too much trust in technology our adversaries have begun to exploit. While some in the private sector have begun to grapple with these challenges, many remain resistant to the changes and regulations needed. Let's face it, and this may come as a shock: Congress doesn't have all the answers either. It is not enough to simply improve the security of our own infrastructure and computer systems. We must work in a coordinated way to deal with the adversaries and bad actors who use these technologies to attack our institutions. Internationally, we need to develop new rules and norms for the use of cyber and information operations. We also need to better enforce existing norms. But norms on traditional cyber attacks are not, alone, enough; we also need to bring information operations into the debate. In addition, we need to build international support for rules that address the internet's potential for censorship and oppression. We need to present our own alternatives that explicitly embrace a free and open internet, and we need that responsibility to extend not only to government but to the private sector as well. The truth is, Western companies who help authoritarian regimes build censored apps or walled-garden versions of the internet are just as big a threat to a free and open internet as government actors. We need to realize that the status quo just isn't working. For over two decades, the United States has maintained and promoted a completely hands-off approach, and today the large technology platforms are, in effect, the only major part of our economy without a specific sector regulator.
For years, we told the world that any tweaks around the edges would undermine innovation or create a slippery slope toward a dystopian system. Instead, with our failure to act, the opposite has happened. Many countries have gravitated toward the dystopian Chinese model, in part because we have not offered pragmatic, values-based alternatives. And we have seen how laws originally intended to promote good behavior, like Section 230, which was meant to incentivize effective moderation, are used by large platforms as a shield to do nothing. Just last year, Americans were defrauded to the tune of $360 million by identity thieves posing as military service members. Now, as the New York Times reported, these aren't sophisticated state actors using fancy marketing tools. They are pretty basic scammers in internet cafes in West Africa, and the truth is, Facebook faces no meaningful pressure to do anything about it. Nor can the defrauded Americans sue Facebook, thanks to Section 230. To make matters worse, Facebook faces no competitive pressures. Section 230 was born out of an era of more vibrant competition on the web, and it rested on the incorrect assumption that sites would pursue robust moderation because, the assumption was, users would flock to other providers if their site became a dangerous or abusive place. Obviously, that constraint has not worked. This is one example of the internet governance regime we have convinced ourselves, and tried to convince the rest of the world, is working just fine. Obviously that's not the case, at least in my mind. Instead of dealing with the misuse, these large companies have externalized the responsibility of identifying harmful and illegal activity to journalists, academics, and independent activists. Rather than promoting pragmatic rules of the road, the United States continues to promote a laissez-faire approach, whether it be refusing to sign the Christchurch Call or including new platform safe harbors in trade agreements.
Last summer, I put forward a white paper, helped in many ways by a person on my staff, Rafi Martina, whom many of you know. It lays out a number of policy proposals for addressing these challenges. I recognize they intersect with a number of the issues that will be discussed today, so I hope this will get the conversation started. We can start with greater transparency. I think folks have a right to know if the information they are receiving is coming from a human being or a bot. I have also put forward legislation that would require greater transparency and disclosure for online political ads. Companies should also have a duty to identify inauthentic accounts. If someone says they are Mark from Alexandria but they are actually Boris from St. Petersburg, I think folks have a right to know that. And if a large Facebook group claims to be about Texas pride, but its administrators are constantly logging in from Moldovan IP addresses, again, I think the users following that group should know that information as well. I have a series of additional bills beyond the ones I've already put forward. We need to put in place consequences for social media platforms that continue to propagate truly inflammatory content. We saw Facebook caught flat-footed in the face of rudimentary audiovisual manipulation of a video of Speaker Pelosi. This does not bode well for how social media is going to deal with more sophisticated manipulation techniques as the era of deepfakes comes upon us. Platforms should be granting greater access to academics and other analysts studying disinformation. Instead, we've seen a number of times where platforms have worked to close down efforts by journalists and academics to track misuse of their products. We also discuss a number of other ideas in the white paper around privacy, price transparency, data portability, and interoperability. It is my hope that companies will collaborate to be part of the solution.
One thing is clear: the wild, wild west days of social media are coming to an end. To conclude, obviously there is a lot of work to be done. Democracies have been at the forefront of technological innovation. In order for democracies to continue that leadership and preserve our own social discourse, we need to question some of the outdated assumptions about these technologies and put forward policies that are truly consistent with our values. I want to applaud you and your partners for putting together this forum today. I think it will be an important part of this debate. I actually believe there still remains the opportunity to make this an area where there is no such thing as a Republican or Democratic solution; it is more a question of the future path and getting the future right, so that we not only preserve our democracy in 2020 but also don't allow these platforms to become vehicles of hate and abuse. It has to be one of the most important issues we face today. I appreciate the opportunity to be here and would be happy to take some questions. Thank you all very much. Questions, comments, suggestions, criticisms? [ inaudible question ] Thank you for the talk. I work with academics who study Twitter data. Twitter, as you rightly recognize, has made it much harder for academics to get access to the data they need to build the models to catch, in real time, the start of something like the "Kamala Harris destroyed" campaign. Can you help us restore that access by creating safe places for academics to work on this data? That is something, again, that we have been engaged in with all of the platforms. Sometimes we get a positive response, but then the details fall short. We've seen that, for example, in the campaign ad space, where platforms voluntarily agreed to place campaign ads in an archive where they could be looked at; it is hard to find and difficult to navigate.
In the short term, I think the most important thing we can do is continue to work with you to bring pressure on the platforms to be more forthcoming. It is a hard area to figure out how to legislate that access, so I am hoping persuasion can be used, but ultimately, if not, we have looked at legislative solutions as well. I hope you will get with my staff and see if we can find better ways to work together. Hello, senator, my name is Catherine Fitzpatrick. I wanted to ask you, how would you either revise or remove Section 230? Would you use a strict interpretation, like incitement to violence, or would you have it broader? How would you go about fixing it? Remember, Section 230. Sometimes people confuse Section 230 with restrictions on free speech. Free speech is protected by the First Amendment. Section 230 is an adaptation of telecom policies that basically said, let's consider these platforms as just being a collection of dumb pipes, so they have no responsibility at all for the content. That might have been the right answer in the late '90s. In 2019, when 65% of Americans get some or all of their news on the platforms, I think it needs to be reexamined. And we have already seen areas where we have legislated exemptions to Section 230: prostitution, sex trafficking, child pornography, bomb-making. As a matter of fact, this is one of the things I hope will come out of these discussions today: how we might rethink Section 230, and whether it really has been a factor in promoting moderation or moving people toward new platforms. I would argue one of the tools we are working on right now is data portability and interoperability legislation, so that if you got tired of platform X, you could easily move to platform Y. Some of us are old enough to remember when it used to be really hard to move from one telephone company to another, until we had number portability. It is the same idea with data portability.
But I am looking for ideas on what a reshaped 230 might look like. In my mind, and this is a bit more controversial, content, and we have already seen Australia and the UK move in this direction, is directly related to identity validation. If we are worried about questions of abuse and misuse of information, identity validation is another approach. Now, in America that might not have as many negative consequences, but if you are a political organizer in Egypt, then identity validation is a huge issue. I actually see 230 and identity as tied. If we could figure out what the right balance might be, you might need more of one and less of the other, but that is really where I think part of this debate ought to head, and I would welcome thoughtful approaches on how we can sort that through. I do think, on the technical side, data portability and interoperability would be a great issue as well, because that would go back to the original premise we had: if you didn't like the kind of information you were receiving on platform A, you could move to platform B. You can't do that today. One thing I would also add, without getting too crazy with the details: if we do data portability and interoperability, I have other legislation, and I find it remarkable that platforms are saying they can't do this, which is total baloney, that would let us know what data is being collected on us and what its value is. The idea that they couldn't provide that is bogus. So all these things fit together. Here is a little vote button. I really appreciate that you called out the questions having to do with the private sector here. Of course, we are at the Federal Election Commission, so we are all very interested in election disinformation and security, but in the last few weeks we have been seeing private sector security incidents involving misinformation.
I'm wondering if you have thoughts about how a private sector security response might springboard or enable a better response to election security when it comes to misinformation. Great question. First of all, how in the heck did we come to 2019 and think that the protection of our election security should be a partisan issue? I mean, would we ever think the protection of our electric grid or our financial system should be a partisan issue? It is crazy, and I would point out that every piece of legislation I've put forward in this area is bipartisan. I'm going to answer this in pieces. First, let's go ahead and put a bill on the floor that makes clear, because there may be some people downtown who don't understand this, that if a foreign government tries to intervene in a presidential election, the obligation is not to say thank you; the obligation should be to tell the FBI. Second, we have to improve the security of our voting systems, because at the end of the day, as the chair knows, in many ways the integrity of the system relies on people's confidence. One of the things we can do to improve confidence is make sure there is a paper ballot backup for every polling station. Number three, let's pass the Honest Ads Act, so we know who is advertising online, with the same disclosure requirements that you have on television or radio. Number four, let's go through some of these issues around identity, Section 230, and basic rules of the road for social media platforms, so that, again, we have better protections recognizing that our elections are to be decided by Americans trying to influence each other, not by outside forces. In terms of security concerns, this is an area where we have been extraordinarily lax. The fact that there has never been a notion of a liability regime around software? I'm not saying we need to head there, but it is a pretty interesting commentary.
The United States government is purchasing annually billions of internet-connected devices, and we have not put in place a minimum security standard for those devices. The cost it will take to rip those out if we find vulnerabilities downstream is extraordinarily challenging. As this goes into the private sector, I am not sure the United States alone can drive norms. We need an international framework, the same way we said certain things around chemical weapons. There are tools, cyber weapons, that should also be deemed inappropriate, so that if nation-states or others use them, the attribution requirements and the ability to punch back could be elevated. I think there really needs to be an engagement with the private sector, and this has for too long been an afterthought. The potential vulnerability that we have is enormous. I keep thinking of the nuclear power plant where we spend millions of dollars, or tens of millions of dollars, protecting against a cyber attack, and then they attack the microwave in the staff kitchen. The community needs to be more engaged. This is much more fun than what I am doing on the floor of the Senate these days. I will take two more and then get out of your hair. It is nice to see you again. Thank you for working on these issues. I have four questions. Should the Information Quality Act apply to government webpages and government social media, since we have an issue with disinformation? Second of all, what do you think should be done with respect to political campaigns using [inaudible]? Could this be accomplished through an agency that takes [inaudible]? I could try to give you an answer. I'm not sure I know what the Information Quality Act is. Shame on my staff. Let me have a little knowledge before I try giving the answer.
I would hope that in terms of standards of behavior within the political process, we could start with an agreed-upon code of conduct that campaigns would voluntarily agree to. I think in this area, starting with, and I hate to say it, industry self-regulation, some front-end voluntary agreement, and potentially back-end enforcement in terms of malfeasance, whether by the FTC or others in terms of inappropriate business practices, is debatable. We need to start with an agreed-upon code of conduct. I'm not even sure all the Democratic campaigns have signed on to one; a few of them have. I think it is pretty remarkable that there has not been that agreement in terms of what we would all agree would be malicious behavior. The question we have is this: it is one thing to use tools to generate interest and followers, but the line between using sophisticated techniques in a legitimate way versus absolute misuse needs to be more clearly drawn. Do you want to pick the last one, Ellen? My name is Michelle. I'm the executive director of the National Association for Media Literacy Education. First, I want to say thank you for having this discussion. I would say we saw this coming, a lot of educators and scholars did, so we are so happy that these conversations are happening at this level. I am struck by how often I am in conversations about disinformation that do not include education. I don't know how we solve this problem without taking a very hard look at our education system and how we are preparing our students to succeed in this world. I don't want to be negative, but I think we are doing a very bad job at it. I want to get your take on how we can have more in-depth conversations about how we are educating the next generation. First, I want to commend your work. Number two, I think a basic component of K-12 education ought to be digital literacy and the ability to have some kind of basic idea of how you spot inappropriate content.
I would like to think there is much we can learn from Europe, where a country like Sweden, for example, rather than trying to put legislation in place, worked on educating their country because of their fear of Russian intervention. I think there are lessons there and in Estonia, which is much further along, particularly in terms of identity validation questions. There is a NATO-backed center in this space, and I believe there were tens if not hundreds of millions of dollars that were going to be dedicated to it. It may be part of the great grab of the $3.6 billion taken for the wall. I'm not sure that is making us safer, and it takes American commitment away from that kind of international collaboration. I think we have to get best practices in place. Just as we have seen civics education happen at the state level, and as a former governor I know this, I think the idea of you and other groups advocating digital literacy in the K-12 curriculum is important. Two last comments. I would argue our failure to act on these issues around disinformation and misinformation is, I think, indicative, unfortunately, of a larger retreat by our country, not just this administration, from America being willing to lead on innovation and set the standards and protocols. I would argue that since Sputnik, every major technological innovation, if not invented in America, was invented in the West, and we set the standards. Increasingly we are walking away from setting the standards. In the case of social media, we have seen the EU step up, along with the UK and Australia on content and privacy, and individual states. We are also seeing this in a host of other areas. Take the standards on 5G: if those standards were set by China, that would be a huge security concern for us.
That is a very different mindset about how to use technology tools, to build a surveillance state that would make Orwell's 1984 blush in terms of how much more sophisticated it is. It is a real challenge, and I don't think as policymakers we talk enough about it. My final point, and I say this to my friends from the platform companies: I feel there has been a general pattern of the platform companies playing rope-a-dope with Congress. They say, we are interested, but when we get to the details, they are not really interested in those details. Anything that affects their business model, they are unhappy about. At the end of the day, that will be a huge business mistake. We are not going away. All that is happening is that the EU and individual states are simply raising the floor, and when we do actually legislate, you will end up with a much tougher regime than you would if you actually worked with us in a collaborative way on the front end. I think you could end up with a model that would be much more detrimental, and at the end of the day, one of the reasons I have been concerned, and some of my folks were upset with me about this, is that these are all international platforms, and I have been worried about simply replacing Western ones with Chinese ones that have no protections at all. So I appeal to those of you from the platform companies: you need to come to the table. It is not enough to say, we would like to work with you, and we will occasionally let academics see small pieces. You need to be a part of this conversation, because we are not going away and the wild west days are over. Candidly, we are one significant event away, and it may not even be in the realm of politics; the potential for market manipulation is frankly greater than the effect this could have on elections. We are one significant event away from Congress potentially overreacting, because the next event could be extraordinarily traumatic.
I thank all of you for being here, and I want to commend Ellen and everybody at the FEC, and our partners from PEN America and Stanford, for this kind of gathering. It is so terribly important. I don't have a lot of this figured out, but I look forward to the results of today's discussion. Thank you all very much. That was really great and inspiring and is going to be a tough act to follow, but we have some really great speakers to come. I want to say we have some people standing; we over-reserved to be a little bit cautious, so people can take those open seats. I have to remind everybody, if you have not noticed by the plethora of cameras in the room, that the symposium is a public event, open to all, and is being recorded and live streamed. The record will be made publicly available. If there are discussions relevant to pending agency rulemakings, they may become part of the rulemaking record. The wifi password is posted on the door and on a couple of the pillars; I'm sorry the print is a little bit small. Restrooms are across the hall. That is all for the housekeeping matters. Our first panel is on how disinformation and new technologies affect the way people think and what we are learning from the global experience. It will be moderated by Eileen Donahoe, and at this time I will ask the first panel to take their seats. Take it away, Eileen. Thank you all for coming. Let me start by giving a great big thank you to Ellen and Suzanne for joining forces on this event. It has been really great to team up with you both, and I really need to give a big thank you to the staff from both the FEC and PEN America, especially those who did the lion's share of the work. I want to say a couple of words about the goal of this event from my vantage point, and then we will turn to the opening panel. The goal of this event was basically to help generate greater public interest and greater political will to combat disinformation in the lead-up to the 2020 election.
We see the disinformation threat as a bipartisan issue, as Senator Warner said. The disinformation threat is simultaneously a national security issue, a cybersecurity issue, and a threat to our democracy, as he just laid out. The animating energy for us today was our shared sense that there has been an inadequate level of public outrage and official response to the foreign disinformation threat. At the same time, we recognize that a big part of why the disinformation threat is so confounding is the domestic factor: the bizarre ways that foreign disinformation mixes with authentic civic discourse, domestic media, and the speech of our own elected officials. Free expression and access to information are supposed to be the lifeblood of democracy. Political speech is supposed to be the most highly protected form of free speech, but it is being turned against free society, undermining the quality of discourse and confidence in our elections. The confounding part of this threat is that most disinformation around elections is neither false nor illegal. It is often a combination of factual information with political framing added to give the facts emotional punch. This framing does not necessarily turn those facts into falsities. Even more so, sharing false information is generally not illegal in a democracy. Given First Amendment doctrine, content-based restrictions on the basis of falsity simply will not pass muster with the Congress or the U.S. Supreme Court. There may be some limited carve-outs for false information related to election procedures, etc., but we will hold that nuance. The basic point here is that protecting the integrity of information around elections is different from infrastructure security, data security, and security of campaigns. Those are really important issues, but our focal point is disinformation. It is a much more nuanced and complex challenge. Many different stakeholders impact the quality of civic discourse, and many stakeholders will be needed to combat this threat.
Turning to the private sector: digital information platforms and social media companies do have a unique and substantial role to play, given how they affect the speed, scale, and amplification of disinformation, as well as the very complex cross-platform dynamics that they know better than anybody else. The big question is, what do we want private-sector digital platforms to do, and what is their responsibility in protecting free expression? As Senator Warner just said, robust moderation is a good idea. But we need to tread really carefully in asking the private sector to do what we do not want governments to do, and that is to take down content based on a political assessment of truthfulness. That is what authoritarian governments do, and we do not really want to go there. From my vantage point, if we push private-sector tech companies to undermine our own free expression principles in the name of protecting democratic processes, we actually end up serving the ends of the very actors whose goal it is to erode confidence in the feasibility of adhering to our own democratic values, and ultimately in democratic governance itself. That said, we cannot sit idly by and neglect the equally important right to democratic participation. This is the tension we are here to help resolve. There is no single obvious lever for combating disinformation, and the bottom line is that this is an all-hands-on-deck moment. We need a comprehensive, society-wide approach. Our hope for today is to flesh out the non-regulatory solutions, assess some of the tools that are already being utilized by global stakeholders, and facilitate greater cross-sector coordination. Probably most importantly, we hope to bring the public into this conversation and help build civic resilience, which goes to the comments made at the end of the last segment about the importance of media literacy. As to the opening session, our goal here is to place the U.S. election integrity challenge in a global frame. We need to see how the U.S.
election threat is one very important data point in a much larger global trend. We have an embarrassment of riches in terms of our speakers, and they will each do a short presentation looking at how digital technology is being exploited globally. We will hear about some of the brain science behind how humans process disinformation, and we will talk about the range of counter-disinformation tools that have been tried around the world and assess what may be transferable to the U.S. I will briefly introduce our speakers. I will start with the Honorable Michael Chertoff; he now co-chairs, with the former Prime Minister of Denmark, the Transatlantic Commission on Election Integrity, of which I am a member. We have Lisa Fazio, an assistant professor at Vanderbilt. She conducts research on how people process fraudulent information and on people's perception of truthfulness. Camille Francois is a chief innovation officer, and she leads her company's work to detect and mitigate disinformation and media manipulation. Simone Wolf is Google's global public policy lead on information technology, and he advises its products, engineering, and trust and safety teams. Susan Ness, a former commissioner, is a distinguished fellow at the Annenberg Public Policy Center. She chairs the Transatlantic High-Level Working Group on Content Moderation Online and Freedom of Expression; Michael, Camille, and I are all participating in that. Last but not least, Nate Miller is a legal director, and he will speak about the counter-disinformation campaign ahead of the 2019 European parliamentary elections and what lessons we can learn from it. After they speak, each for five or so minutes, we will turn to all of you for your questions, comments, and experiences. We will start with Michael. Thank you, Eileen. Thank you, Madam Chair, and everybody else for sponsoring this event.
Just to dive right into it: if you look at information operations during the Cold War, which were efforts to use information to undermine the efforts of your adversary, you quickly recognize that this is really a domain of geopolitical conflict. In fact, there is a reference to this concept in the so-called Gerasimov doctrine, based on a speech some years back by the chief of the Russian general staff: manipulating the minds of your adversaries so that you undercut their ability to resist. Essentially, you disrupt their unity of effort. I should say that this is not just about elections. The effort to use and weaponize information for the purpose of undermining your adversaries is broadly applicable to any element of democracy and freedom, and the idea is simply to paralyze your opponent and make them shrug their shoulders and give up. This is not new. If you go back 100 years to the early stages of the Soviet Union, when they formed the Comintern, from the very beginning the ideology of communism sought to propagate confusion and to undermine the morale of the adversary as an element of communist efforts to dominate the world. During the Cold War we saw propaganda efforts to manipulate populations in order to achieve the results the Soviet Union wanted. We have seen it even more recently. If you look at what happened in Europe in the early part of this century, in the first few years, what you will see are Russian efforts to use information manipulation, agents of influence, and money to drive behavior in a way that would favor parties or politicians viewed as pro-Russian and disfavor those viewed as anti-Russian. Look at who got funding from Russia in the first decade of the century, when a certain right-wing party was experiencing financial difficulty; that is one example of this kind of effort.
Frankly, the use of investment by Russian oligarchs in Europe as a way to propagate and influence the discourse in a way that was favorable to Russia was another dimension of this kind of information operation, or active measure. So what is different now? Why are we so focused on this? I would say there are a couple of elements that have changed. They have not undercut the basic thrust of information or disinformation operations, but they have made them much more dangerous and difficult to control. First of all, information is out there from a number of different sources, vastly ramped up over what we saw even 10 years ago, and certainly 50 years ago. There is the ability to use all these different avenues to influence what people read and what they hear, and to drown out voices that may be inconsistent or contrary. Some of us are old enough to remember the days when there were three television networks and you had Walter Cronkite and his counterparts, and they basically tried to balance. They were not perfect, but those were the choices. That is very different from what we have now, when anybody out there can be a validator. That is one element. Social media has also been amplified by the use of data analytics to micro-target particular audiences. You don't have to speak consistently to everybody. You can make sure your message is specific to particular people or groups. There are other elements that increase the risk. One is the fragmentation of the mainstream media and the effort now to drive revenue by getting people to tune into your particular media outlet. It means there is a tendency for the mainstream media to amplify messages from social media. In fact, in many respects what you see on Twitter or Facebook is really just the invitation to get cable television or talk radio to focus on a particular plot line and then to amplify and propagate it for the viewers.
Finally, the theft of data, and the release of that data, allows people who have a particular agenda to get access to information they can either publicize or distort in order to drive a particular message. I should also say that it is not just the Russians. The report shows how blatant the Russian efforts were to get involved in the 2016 election, but we have also seen influence operations coming out of China. More important, we need to be honest: it is us. We are doing a lot to promote the disinformation ourselves. Sometimes we are doing it at the behest or encouragement of foreign actors; sometimes we are doing it on our own because of various extreme views, which foreign actors then amplify. It is not just the adversary that is doing it. It involves our own citizens, many of whom take extreme positions and then exploit these techniques and technologies to propagate them. I will conclude by saying there are two things on the horizon, or maybe even closer than that, that we need to think about, because they take this to the next level. One is the use of so-called artificial intelligence. The amount of data that is out there about what people are interested in, which is critical for micro-targeting, is so vast that no team of human beings in St. Petersburg or Beijing could analyze it in anywhere near real time without artificial help. That is what AI offers here. For those of you who are following cybersecurity, you know that the Chinese in particular have been accumulating vast treasure troves of data about citizens in the United States. Some of it is stolen; some of it is obtained legitimately. All of it can be correlated and used to target particular people who may be susceptible to different messages. This challenge will only become more acute. The other thing I would say is that we are talking about disinformation to affect the election.
I don't know that I am as concerned about efforts to persuade people to vote for one candidate over another as I am about efforts to keep people from voting at all. And here is an even more challenging question: what happens after the election? Let's say it is a close call, and let's say there are disputes. Take your mind back to 2000 and the Bush-Gore recount. Now imagine that occurs again, or something similar, and you then had a concerted effort to drive disputes and disbelief about the outcome of the election. That could affect not only public confidence in the outcome, but it could actually, operationally, affect the ability of the U.S. government to function over a period of months, which of course would be a wonderful gift to our adversaries. We need to start thinking now about the ways we can validate and adjudicate the accuracy of election results so that we don't have 2000 on steroids. Thank you. Thank you. I will lay down a couple of markers of things you said that are really important. The geopolitical dimension is not new, and disinformation is not new, but there are new dimensions related to social media. You highlighted the important dimension of doxing and of professional media reporting on doxed material. Most importantly, I think, disinformation is not always about changing people's votes; it is about suppressing the vote and/or eroding confidence in the election outcome, which could end up being the biggest problem. Now we will turn to Lisa Fazio to hear about how disinformation is processed in the human brain. I should have slides popping up. All right. I'm a cognitive psychologist. I study how we process true and false information in our brains: what makes things memorable, and how we decide what is true and what's not. What I want to talk about today is why disinformation is a problem. We are smart people; why can't we recognize that something is false and not have it affect our beliefs? Why does it still change our minds in some way?
To start, let me put my little professor hat on and ask you some questions. Audience participation: you have to yell out the answer. In the biblical story, what was Jonah swallowed by? A big fish. Well, it depends on your translation of the Bible. How many animals of each kind did Moses take on the ark? Most of you yelled out "two," even though all of you know that it was Noah, not Moses, who took the animals on the ark. This is something we call knowledge neglect. We often have relevant knowledge in our heads and yet fail to use it in a given situation. We often fail to notice errors in what we read or hear, and those errors can affect our later thoughts and beliefs. So imagine your friend told you some new, interesting fact. This might happen in person, but now it is more likely to happen online, in a social media feed. How do you decide if what they told you is true or false? There are at least two ways you can go about this. You can think it through against your prior knowledge, asking whether it makes sense given what you already know about the world, or you can use a quicker way: going with your gut. Does this feel true? We know that in a lot of situations, humans take the easier, quicker path and go with their gut reaction. In fact, over 30 years of research shows that while we use our prior knowledge to determine truth, we also rely on a lot of other cues, and one of those cues is how easy a sentence is to understand or process: the fluency with which you can comprehend it. One of the big things that increases that fluency, or processing ease, is repetition. The more times something is repeated, the more likely we are to think it is true. Hundreds of research studies have shown that repeated statements are judged to be more true than things we have only heard once. In fact, this happens even when you have prior knowledge. We have done a few studies looking at people who know that the Cyclops is the legendary one-eyed giant in Greek mythology.
If they read the sentence twice that the Minotaur is the legendary one-eyed giant in Greek mythology, they think it is more likely to be true than if they had only read it once. Repetition increases belief in these false statements even when you have the prior knowledge. People have also done research using typical fake news headlines, pro- and anti-Trump and pro- and anti-Clinton fake news. When you repeat it, people think it is more true. Most interesting to me, this happens regardless of their political beliefs. Whether the statement matches or goes against your actual political beliefs, you still think it becomes more true with repetition. So why does this happen? One of the big reasons is that it is really effortful to consult our prior knowledge. We go with what is good enough or close enough to what we think is true. Our brains are amazing computational machines. Even the best AI can barely do what the human brain does in a millisecond, like look out into this room and recognize the scene and everyone in it. But our brains are also really lazy. They don't like to work. They will take shortcuts whenever they can, and one of those shortcuts is that when something is good enough or close enough, we assume it is true and move on. That makes it really difficult for us to notice errors in what we read and hear. How can we stop it? What can we do? Prior knowledge isn't enough in and of itself. I showed you the Minotaur example: even though you know it is the Cyclops, you think the statement is more true when it is repeated. It does help with bigger errors. If I had asked you how many animals of each type Nixon took on the ark, you all would have noticed the error. So there are limits. The big thing that seems to help is really thinking deeply and critically about what we are reading. Taking a second to pause and ask: how do I know this is true? Where is this coming from? Things like that can help. Giving people an accuracy focus, an accuracy norm, is useful.
Having people ask themselves how they know whether something is true or false helps. In one of his fireside chats, Roosevelt once said that repetition does not transform a lie into a truth. While that is true on its face, in that repetition cannot change the actual truthfulness of a statement, he is wrong in terms of what repetition can do in our heads and in our minds. It turns out that repetition has a strong impact on what we believe. Thank you. [applause] Hopefully we will come back to your point about how we build resilience and deeper critical thinking and accuracy norms. This again goes to the question we had earlier about media literacy and how to build a more effective program. We are now going to turn to Camille Francois. Thank you, Eileen. I want to talk briefly about threats and the global for-hire market of disinformation. When we think about foreign threats to electoral integrity, many of us think about Russia, and I think there is a little bit of Russia fatigue, almost. I do want to talk about it a little bit, because I think it is a good marker. Senator Warner touched on this this morning. My question is: what have we learned by now, what do we know, and what do we still not know? People have been looking at the very details of the Russian interference against the United States and other nations, including people working with the Senate Intelligence Committee on 2016, 2017, and the midterms in 2018. And there is something perhaps surprising that I want to share with you today, which is that there is a lot we don't know about what happened in 2016. For all of the headlines, for all of the data shared, for all the wonderful academic reports, for all of the discussions in the hearings, we still have major blind spots in truly understanding how Russia targeted the United States, and we have to learn from that. I just want to give you two examples. A lot of people think about the IRA, based in St. Petersburg: people on staff creating messages to target specific communities in the U.S.
across social media platforms. But the IRA was not the only Russian entity involved in producing and disseminating disinformation on social media targeting Americans. There was also the GRU, the Russian military intelligence. We have details and data on what the IRA did; we don't have a corresponding collection for what the GRU did. That matters a lot, because the GRU is the more persistent actor. It is easier to dismantle the IRA than the GRU. The GRU is responsible for some of the most complex techniques that we saw, the hack-and-leak operations, where you hack someone's email, you package it, and then you make it into a leak. Those techniques prey on the fragility of our own networked public sphere. They attack us exactly where our democratic processes are vulnerable, and that makes it complicated: to look at a leak and ask, is this actually coming from a foreign adversary, and how do I handle this? That is one example of a blind spot. Another one that comes to mind is the amount of targeted direct messaging. When I worked with activists who were on the receiving end of the Russian campaign in 2016, a lot of the targeting was done through direct messages. We have never seen any of that data; we only have a sense of how much of that activity happened. And it was so much more insidious; in some ways it was much more powerful than some of the public posts that have been analyzed ad nauseam. So we have to recognize that some of these techniques, frankly, we have not publicly discussed. Direct messaging is a key tool, and the media are often targeted by foreign adversaries through it. So that is it for Russia, I promise. I know people want to talk about Russia all day, every day. The other thing I wanted to say on the adversary side is that, of course, there are other foreign players in the game. Sometimes I hear that other people are now doing what Russia did. This is historically untrue. It is hard for us to recognize, but others have been doing this for even longer. It is a testament to how much we missed that some of these actors started targeting the U.S. on social media as early as 2013.
This was us waking up to the fact that a series of foreign adversaries had been using these techniques to target our conversations for a very long time. We have seen a lot of details on the recent Iranian campaigns, and we are seeing the first data points on how China is using social media to target Americans. Data is also available on Saudi Arabia and the apparatus it has built. These foreign actors have their own preferred techniques, preferred targets and communities, and it is important to analyze them separately and together, to try to understand the telltale signs campaigns carry when they come from here or there. Adversaries are also engaged with us in a cat-and-mouse game. It was interesting around the midterms in 2018, when we saw Russia, and specifically the IRA, come back again: we were given a way to assess how much they had progressed. They were better at hiding their traces. The tactics were new and frankly more insidious. This is not something we can fix with one takedown. This is something we are going to have to engage with in the long run, a bit of a cat-and-mouse game. That being said, we have to be very straightforward about what we have learned and how things are evolving. Very quickly, on the presence of the for-hire market: researching this part was a little bit more fun, if a bit depressing. Looking at all that you can purchase online, and what people are selling, is quite disturbing. The for-hire market of disinformation is growing every day. It has small players, people who hack into pages and sell them, and very large mercenary-like troll farms, and individuals, and it is global at this stage. It used to be more domestic than it is now. If you look at a country like the Philippines, it has a strong industry of troll farms, and what they are exporting is an international and global business. What do we do about that? There are two important legs to attacking the problem, and the first is detection.
We work very hard on better detection techniques, including with our Silicon Valley partners. The idea is to find a forensic signal showing that someone is manipulating the public discourse. Sometimes it can be through bots, if you don't have a good budget. If you have a little more budget, perhaps you can do something more subtle and complicated using actual troll farms. But detection alone will not be enough, because frankly a lot of this for-hire market is close to what advertisers and marketers are doing, and to the technologies being developed for them. There is a gray area. Here is an example. In 2018 we saw a candidate suddenly being pushed by accounts posting the exact same thing at the exact same time. It created massive chaos in Silicon Valley: the candidate must have hired a troll farm, or maybe hired bots; something is wrong; this is disinformation and we have to take it down. In fact, the campaign had built an app, and their supporters had downloaded the app, granted access to their own social media accounts, and agreed to participate in this one push campaign. They had downloaded the app and were fully part of something they had agreed to. That gave Silicon Valley pause: what do we do with that? Is that okay? Is that coordinated inauthentic behavior, or is that how people will be campaigning? Because of our lack of serious dialogue on what we are willing to accept on social media and what we are not, we will find increasing numbers of gray-area situations like that as we head toward 2020. I would really encourage a serious conversation with candidates and parties and PR firms on what is an acceptable practice on social media, what borders on disinformation, and what is simply a modern and creative use of digital tools. Without that, we are bound to have very complex and difficult conversations that will not help our institutions. So those are the two big things; I would keep them in mind. How worried are you about coordination between all of those adversaries: Russia, Iran, China, Saudi Arabia? Will they be coordinating?
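The coordination signal described here, many distinct accounts posting identical text at nearly the same moment, can be sketched as a simple heuristic. This is a minimal illustrative sketch, not any platform's actual detection system; the data shape, thresholds, and function name are all assumptions made for the example.

```python
from collections import defaultdict
from datetime import datetime, timedelta

def find_coordinated_posts(posts, min_accounts=3, window_minutes=5):
    """Flag texts posted verbatim by several distinct accounts
    within a short time window, one crude signal of coordination.
    `posts` is a list of (account, text, timestamp) tuples."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        # normalize so trivial variations still match
        by_text[text.strip().lower()].append((account, ts))

    window = timedelta(minutes=window_minutes)
    flagged = []
    for text, entries in by_text.items():
        entries.sort(key=lambda e: e[1])
        for _, start in entries:
            # distinct accounts posting this text inside one window
            burst = {a for a, t in entries if start <= t <= start + window}
            if len(burst) >= min_accounts:
                flagged.append((text, sorted(burst)))
                break
    return flagged
```

A real system would combine many such signals (account age, shared infrastructure, posting cadence) and, as the example of the campaign app shows, would still face the gray area where real supporters knowingly opt in to exactly this behavior.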
For all of us, we have to be thinking about the norms of political campaigns: what counts as an acceptable political strategy that we want campaigns to be using, versus what counts as inauthentic, coordinated, manipulative behavior that we want to stop. That is a really hard question. I think we will turn to Simone; maybe you can help answer that question. Thank you, and good morning. Can you all hear me? Is this better? Yeah. Good morning, and thank you. The comments so far have been very insightful. Obviously for YouTube, the challenges raised by malicious actors who would try to use the platforms to deceive and harm our users are a threat to our mission, which is to connect users to useful information, and also to our business interests. These are the kinds of challenges we have had to deal with since the early days of our platforms: broadly, the ways in which people try to elevate their content inauthentically for the purposes of scamming or making a profit, cross-linking websites, and so on. The challenges raised by disinformation when it comes to the functioning of democracy are top of mind for us, and we take them extremely seriously. I would add that we take them seriously during, but not only during, elections. We try to have responses that extend beyond the scope of civic events, precisely because actors who would try to impact elections do not wait for elections to begin; they use the quiet time to plan their efforts and plan ahead. We try to stay ahead of that. Very briefly, I will go into the high-level approach we deploy across multiple platforms, including Google and YouTube. We thwart these efforts with the understanding that there is no point at which our job is done: each time we do something new, the other side does something different as well. Then I will go into some of the emerging threats we see around the world and how we try to stay ahead of those.
I'm happy to dive back into these points during the Q&A. In terms of how we approach these issues at the company level, we have three major types of responses built into our products, in addition to collaborating with others in the industry and with educators. First, we try to address the challenge by designing our systems so they make quality count. By this we mean that they try to form, at the algorithmic level, an understanding of what sources are authoritative on an issue, and to elevate those in response to users' searches. That, as it turns out, is what our services were built on to begin with, and even though we don't have a perfect system, we keep innovating; Google Search made a great many changes last year alone, so it is not a done deal. This is something we have been working on for quite some time. The flip side of elevating quality is trying to reduce the spread of borderline content in recommendations on YouTube. That, of course, requires trying to understand what kinds of factors and behaviors malicious actors are going to use to try to deceive our systems and game them. To that end, we have had for quite some time now policies, rules of the road, that provide a sense of what is permitted and what is not. Automated systems and human teams try to catch bad actors who would try to skirt these policies. If you had created a piece of content that you wanted to propagate very fast, you would try to game our systems in that way. Thwarting these inauthentic behaviors is top of mind for us. We also have policies about deceptive practices that are harder to catch, and we invest a lot of research into them, with teams looking at misrepresentation: it is not okay to misrepresent ownership or to impersonate another channel on YouTube. So that is malicious actors.
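The "make quality count" idea, blending topical relevance with a prior on how authoritative a source is, can be illustrated with a toy ranker. This is a hedged sketch of the general technique only, not Google's actual ranking system; the weights, scores, and domain names are invented for the example.

```python
def rank_results(results, authority, w_relevance=0.6, w_authority=0.4):
    """Order (url, source, relevance) tuples by a weighted blend of
    query relevance and a per-source authoritativeness prior, so a
    highly clickable page from an unknown source can be outranked by
    a slightly less relevant page from an authoritative one."""
    def score(item):
        _, source, relevance = item
        prior = authority.get(source, 0.2)  # low default for unknown sources
        return w_relevance * relevance + w_authority * prior
    return sorted(results, key=score, reverse=True)

# Hypothetical authority priors and candidate results.
authority = {"broadcaster.example": 0.9, "anon-blog.example": 0.1}
results = [
    ("https://anon-blog.example/shock", "anon-blog.example", 0.95),
    ("https://broadcaster.example/report", "broadcaster.example", 0.80),
]
ranked = rank_results(results, authority)
```

The design choice the sketch illustrates is that authoritativeness acts as a prior applied uniformly, rather than a per-item truth judgment, which is why platforms describe it as elevating quality rather than policing truth.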
The third layer is to provide context as often as possible, giving users navigation aids and information about what they are looking for. These are things like information panels: on Google and on YouTube you would see panels that show you, for instance, whether a channel belongs to a public broadcaster, and you see those with breaking news on YouTube. We also have ways to provide more perspectives, such as the full-coverage function, which shows the whole range of coverage available on an issue, not personalized, and lets you explore what is available. Those are three ways we try to counteract these efforts in our products, and it is always a work in progress. We double down when we know elections or other similar events are coming, because we know we are likely to see more attacks or efforts to thwart our systems. It has been interesting to observe over the past few years, as we have seen variations of these challenges in numerous countries around the world, how much the local specificity of each society in which we operate matters. A simple example is that in many places around the world, group messaging apps are a significant part of information discovery, in ways that are far more pronounced than here in the United States. That changes the vectors of attack for malicious actors. It still means disinformation tries to get to our platforms, but it is delivered at a different point in the process, and we have to be mindful of that as we expand into each new country. We try to stay ahead of their goals. It does not mean we have to start from scratch every time, but we do have to be mindful. The last thing I will say is that beyond those local specificities, there are two other flags for this group. One is that there is a rise in concern around manipulated media, whether AI-generated or not, which poses a new challenge for us. YouTube has seen, from its very early days, individuals manipulating the content of videos by editing them and splicing them and so on to deceive users.
It is something we are mindful of because such manipulations are quite cost effective for the malicious actors who use them, and they can get some traction. We do have policies around those to make sure we can remove them, and we have systems trying to catch them, but it remains top of mind for us. The other flag is that as the large platforms create more friction for the operations of these various actors, it might be attractive for them to set up their own spaces, in which they can only reach a smaller audience but can reach scale faster and with less friction, and then try to come back to the larger platforms. That is not something we have observed widely at this point, but it is one of the many ways we try to look at what they might do next and how we stay ahead of it: staying ahead of threat actors by looking at what they do and understanding how it may affect our systems.

Two points I want to underscore. One is how hard it is to delineate what now counts as an election contest, because the time frame matters: the stuff that happens way before and the stuff that happens after all matters to the integrity of the election. The other question I would ask, on this point of making quality count and elevating quality: what kind of resistance have you had to that approach from domestic political actors, who may resist your assessment on the basis of quality? I guess we will now turn to Suzanne.

Thank you, Eileen. If you begin with the premise that freedom of expression is fundamental to democracy, then government-mandated removal of content deemed false and deceptive is a challenge for democracy. Western democracies are stymied over how to tackle content that is odious and manipulative, and is disseminated with the goal of dividing society and destroying our faith in institutions, but is not illegal. Most of the lessons that we have learned, and that the group I formed has learned, in looking at specific laws and regulations and proposals and initiatives, public and private, really point to what not to do.
It is hard to come down to what actually will work while protecting freedom of expression. Governments often lump together hate speech and disinformation, or viral deception, in an effort to regulate platforms per se. Some European member states, including the UK, at least for the next couple of weeks, and France, propose new regulatory regimes using existing models, either financial regulation or broadcast regulation. The UK white paper would anoint a new regulator who would not police truth on the internet but would mitigate the harm caused by disinformation and online manipulation, which it says constitute threats to our way of life. It purports to focus on platform behavior rather than content itself, recognizing that it is impossible to catch all harmful content. Platforms would owe an ill-defined duty of care to the public, and they would be evaluated on whether they have taken proportionate and proactive measures to help users understand the nature and reliability of the information they are receiving and to minimize the spread of misleading information. They would be subject to a code of practice requiring them to beef up transparency, including clarity around political advertising; we are seeing that as a major theme across the board. They would also have to cooperate with fact checkers, boost authoritative news, and make less reliable sources less visible to users. All of this sounds good, but the focus on harm to society would clearly have a chilling effect on free expression, and platforms would be more likely to remove more content to avoid heavy fines. The amorphous duty of care might lead to proactive monitoring of legal content, which again is very troubling. The UK Parliament is expected to take up legislation in the fall that will set up a regulator to address illegal content, leaving the question of legal content that is harmful for a later day.
In France, they enacted a law in 2018 to address manipulation of information online around elections. It imposes strict rules on the media for the three months leading up to an election and gives the authorities the power to remove fake content spread by social media and to block sites that publish it. It requires platforms to publish the amount of money involved in sponsored content, candidates can sue for removal of contested news stories, and importantly, judges have 48 hours to rule, so it does go through a judicial process. Last May the French government also issued a paper proposing an innovative regulatory regime that would focus on platforms' behavior, again, not content. It would examine transparency, terms-of-service enforcement, and redress for those who were harmed. It avoids regulating the content itself. The European Commission has put forward a package of activities to address disinformation in the run-up to the European Union parliament elections this past May. It expanded digital literacy efforts and supported quality journalism. It elevated fact checking and provided a network for fact checkers, and it also promulgated a code of practice. The commission itself has wrestled with the tension between regulating content and the notion that it does not want to be a ministry of truth; that is partly a legacy of how parts of Europe lived under communist regimes. The code of practice is a self-regulatory measure. It applies only to platforms, and only a handful of platforms and advertising trade associations actually signed on to the code. It encourages transparency, literacy, researcher access to data, and ad transparency. It led to an acceleration of closing accounts, labeling bots, and prioritizing relevant, reliable information, and it created greater cooperation between platforms and EU governments. Still, however, the EU is talking about regulating the platforms and will more likely introduce legislation once the new commission is impaneled this fall.
They are looking at the Digital Services Act, which would also address the e-commerce directive, the EU's version of the safe harbor for platforms. In conclusion, looking at the efforts that have been taken so far, apart from some of the Nordic and Baltic countries, which have avoided content regulation in favor of public education, fact checking, and encouraging quality journalism, there is no silver bullet to address disinformation in a manner that is true to freedom of expression. The consensus, however, is that transparency with respect to political advertising is something that needs to be pursued, and that it is best to address behaviors and actors as opposed to the content itself. Thank you.

That is great. I want to get the audience thinking; after our next speaker, we will ask you to come in, but really grapple with this question Suzanne raised about how hard it is to craft a regulatory response that actually works in combating disinformation but does not undermine free expression. And also, what do you think about this distinction of not regulating on a content basis but instead going after manipulative behavior? Does that distinction make sense to you? So reflect on that. Our last speaker is David Miller.

Thank you. Can you hear me? Okay. I cannot ask for a better introduction than that, because I am actually going to talk a little bit about content. I want to talk today about a policy solution that I think can help us bridge the gap between the democratic values of freedom of expression and ways that we can combat disinformation, and I will talk about that in the context of some of the work that we did in Europe in advance of the elections for the European Parliament.
Briefly, for those of you who do not know us, we are a global civic advocacy organization with 53 million members from every country in the world. We are exclusively funded by those members, and they take a very active role in deciding what we do, what issues we work on, and what campaigns we launch. Our members are deeply concerned about the threat that disinformation poses to our democracy, to the extent that they funded a large program that we launched in Europe: a European team of "elves" to combat the trolls. Our team of elves, and I will not belabor the point, looked at and discovered disinformation networks in European countries, and we reported our findings to Facebook. Facebook took action on everything we reported, but we estimate that the posts and groups and pages that Facebook took action on reached 750 million views just in the three months leading up to the European elections. I want to pull a couple of important things out of that. The first is that any reports you may have read about the demise of disinformation in the electoral context have been greatly exaggerated. Second, I want to mention that civil society can do detection and can do reporting, and there are ways to optimize that. The top issue, I would say, was that it was difficult for us to get Facebook to take action in a timeframe that was relevant to the upcoming European elections. We were able to do so only because we had a certain measure of access and relationships, and we were able to bring a certain amount of pressure, or a threat of pressure, to bear; that is too high a bar. There need to be better, more open, and easier methods for civil society to report disinformation they have found and get it taken care of. The most important thing I want to pull out of this is that however many tens or hundreds of millions of people saw this toxic content, this disinformation, the vast majority, as we are sitting here today, do not know that they were duped.
We are not currently doing enough to counteract the effects of disinformation that has already gotten out there. The solution that I want to offer today, to start thinking about how to do that, is called Correct the Record. It starts from the very simple premise that, of the tens or hundreds of millions of people who saw the disinformation content in Europe, really the only entities capable of reaching them and capable of letting them know that they were duped are the platforms themselves. And so Correct the Record is a very simple idea: once a piece of content has been determined to be disinformation by independent fact checkers, the platforms can and should let each person who has seen, liked, commented on, interacted with, or shared that piece of content know that it was disinformation, that it was false or misleading. This approach has a few advantages. First and foremost, it is not censorship. There is no ministry of truth. We are asking neither a private actor nor the government to determine what is true or false. There is no censorship; people are free to post and share whatever they like. We have seen exceptional results in studies comparing control groups with groups exposed to corrections, and we have done qualitative research as well.
