With the victims and their families. The motive behind the attack is not in question. The terrorist had written an extensive manifesto outlining his white supremacist, white nationalist, anti-immigrant, anti-Muslim, and fascist beliefs. His act was horrifying beyond words, and it shook the conscience. Shockingly, the terrorist was able to livestream the attack on Facebook, where the video and its gruesome content went undetected initially. Instead, law enforcement officials in New Zealand had to contact the company and ask that it be removed. When New Zealand authorities called on all social media companies to remove these videos immediately, they were unable to comply. Human moderators could not keep up with the volume of videos being reposted, and their automated systems were unable to recognize minor changes in the video, so the video spread online and around the world. The fact that this happened nearly two years after Facebook, Twitter, Google, Microsoft, and other major tech companies established the Global Internet Forum to Counter Terrorism, or GIFCT, is troubling, to say the least. The GIFCT was created for tech companies to share best practices and technology to combat the spread of online terrorist content. Back in July 2017, representatives of the GIFCT briefed this committee on this new initiative. At the time, I was optimistic about its intentions and goals and acknowledged that its members demonstrated initiative and willingness to engage on this issue while others had not. But now that white supremacist terrorists have been able to exploit social media platforms in this way, we have reason to doubt the effectiveness of the GIFCT and the companies' efforts. Representatives of the GIFCT briefed this committee in March, after the Christchurch massacre. Since then, other members of this committee and I have asked important questions about the organization and have yet to receive satisfactory answers.
Today, I hope to get answers regarding your actual efforts to keep terrorist content off your platforms. I want to know how you will prevent content like the New Zealand attack video from spreading on your platforms again. This committee will continue to engage social media companies about the challenges they face in addressing terror content on their platforms. In addition to terror content, I want to hear from our panel about how they are working to keep hate speech and harmful misinformation off their platforms. I want to be very clear: Democrats will respect the free speech rights enshrined in the First Amendment, but much of the content I am referring to is either not protected speech or violates the social media companies' own terms of service. We have seen time and time again that social media platforms are vulnerable to being exploited by bad actors, including those working at the behest of foreign governments who seek to sow discord by spreading misinformation. This problem will become more acute as we approach the 2020 elections. We want to understand how companies can strengthen their efforts to deal with this persistent problem. At a fundamental level, today's hearing is about transparency. We want to get an understanding of whether and to what extent social media companies are incorporating questions of national security, public safety, and the integrity of our domestic institutions into their business models. I look forward to having that conversation with the witnesses here today and to our ongoing dialogue on behalf of the American people. I thank the witnesses for joining us and the members for their participation. With that, I now recognize the ranking member of the full committee, the gentleman from Alabama, Mr. Rogers, for five minutes for the purpose of an opening statement. Rep. Rogers: Thank you, Mr. Chairman. Concern about online content has been here since the creation of the internet.
This concern has peaked over the last decade, in which foreign terrorists and their global supporters have exploited the openness of online platforms to radicalize, mobilize, and promote their violent messages. These tactics have proved successful, so much so that we are seeing domestic extremists mimic many of the same techniques to gather followers and spread hateful, violent propaganda. Public pressure has grown on social media companies to change their terms of service to limit posts linked to terrorism, violence, criminal activity, and, most recently, hateful rhetoric and misinformation. The large and mainstream companies have responded to this pressure in a number of ways, including the creation of the Global Internet Forum to Counter Terrorism, or GIFCT. They are also updating their terms of service and hiring more human content moderators. Today's hearing is also an important opportunity to examine the constitutional limits placed on the government to regulate or restrict free speech. Advocating violent acts and recruiting terrorists online is illegal, but expressing one's political views, however repugnant they may be, is protected under the First Amendment. I was deeply concerned to hear recent news reports about Google's policies regarding President Trump and conservative news media. Google's head of responsible innovation, Jen Gennai, recently said, quote, "We all got screwed over in 2016. The people got screwed over, the news media got screwed over, everybody got screwed over, so we rapidly have been like, how do we prevent this from happening again?" Then Ms. Gennai remarked, "Elizabeth Warren wants us to break up Google. That will not make it better. It will make it worse. All of these smaller companies that do not have the same resources we do will be charged with preventing the next Trump situation." Now, Ms. Gennai is entitled to her opinion, but we are in trouble if her opinions are Google's policy.
This report and others like it are a stark reminder of why our founders created the First Amendment. In fact, the video I just quoted from has been removed from YouTube, the platform owned by Google, who is here today. I have serious questions about Google's ability to be fair and balanced when it appears they have colluded with YouTube to silence negative press coverage. Regulating speech quickly becomes a subjective exercise for the government or the private sector. Noble intentions often give way to bias and personal issues. The solution to this problem is complex and will involve enhanced cooperation among the government, industry, and individuals, while protecting the constitutional rights of all Americans. I appreciate our witnesses' participation today. I hope today's hearing will be helpful in providing greater transparency and understanding of this complex challenge, and with that, I yield back, Mr. Chairman. Rep. Thompson: Thank you very much. Other members are reminded that under committee rules, opening statements may be submitted for the record. I welcome our panel of witnesses. Our first witness, Ms. Monika Bickert, is the vice president of global policy management at Facebook. Next, we are joined by Nick Pickles, who currently serves as global senior public policy strategist at Twitter. Our third witness is Derek Slater, global director of information policy at Google. Finally, we welcome Ms. Nadine Strossen, who serves as a professor of law at New York Law School. Without objection, the witnesses' full statements will be inserted in the record. I will now ask each witness to summarize their statement for five minutes, beginning with Ms. Bickert. Ms. Bickert: Thank you, Chairman Thompson, Ranking Member Rogers, and members of the committee. And thank you for the opportunity to appear before you today. I am Monika Bickert, Facebook's vice president for global policy management, and I am in charge of our counterterrorism efforts.
Before I joined Facebook, I prosecuted federal crimes for 11 years at the Department of Justice. On behalf of our company, I want to thank you for your leadership combating extremism, terrorism, and other threats to our homeland and national security. I would also like to start by saying that all of us at Facebook stand with the victims, their families, and everyone affected by the recent terror attacks, including the horrific violence in Sri Lanka and New Zealand. In the aftermath of these acts, it is even more important to stand together against hate and violence, and we make this a priority in everything that we do at Facebook. On terrorist content, our view is simple: there is absolutely no place on Facebook for terrorists. They are not allowed to use our services under any circumstances. We remove their accounts as soon as we find them. We also remove any content that praises or supports terrorists or their actions, and if we find evidence of imminent harm, we promptly inform authorities. There are three primary ways we are implementing this approach. First, with our products, which help stop terrorists and propaganda at the gates. Second, through our people, who help us review content and implement our policies. And third, through our partnerships outside of the company, which help us stay ahead of the threat. So first, our products. Facebook has invested significantly in technology to help identify terrorist content, including through the use of artificial intelligence, but also using other automation and technology. For instance, we can now identify violating textual posts in 19 different languages. With the help of these improvements, we have taken action on more than 25 million pieces of terrorist content since the beginning of 2018. Of the content that we have removed from Facebook for violating our terrorism policies, more than 99 percent is content we found ourselves using our own technical tools before anybody reported it to us. Second, our people.
We now have more than 30,000 people working on safety and security across Facebook, around the world, and that is three times as many people as we had dedicated to those efforts in 2017. We also have more than 300 highly trained professionals exclusively or primarily focused on combating terrorist use of our services. Our team includes counterterrorism experts, former prosecutors like myself, former law enforcement officials, and former intelligence officials; together, they speak more than 50 languages and are able to provide 24-hour coverage. Finally, our partnerships. In addition to working with third-party intelligence providers to more quickly identify terrorist material on the internet, we also regularly work with academics who are studying the latest terrorist trends, and with government officials. Following the attacks in New Zealand, Facebook was proud to be a signatory to the Christchurch Call to Action, a nine-point plan for the industry to better combat terrorist attempts to use our services. We also partner across the industry. As the chairman and ranking member mentioned, in 2017, we launched the Global Internet Forum to Counter Terrorism, or GIFCT, with Google, Microsoft, and Twitter. The point of the GIFCT is to share information, technology, and research to better combat these threats. Through the GIFCT, we have expanded an industry database for companies to share what we call hashes, which are basically digital fingerprints of terrorist content, so that we can all remove it more quickly and help smaller companies do that, too. We have also trained over 110 companies from around the globe in best practices for countering terrorists' use of the internet. Facebook took over as the chair of the GIFCT in 2019, and along with our members, we have this year worked to expand our capabilities, including making new matching technology available to other companies, especially smaller companies, and we have also improved our crisis protocol.
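The shared hash database described in the testimony can be sketched in a few lines. This is an illustration only, not the GIFCT's actual implementation: the class and function names are hypothetical, and the real database uses perceptual hashes that tolerate small edits, whereas the plain SHA-256 digest below only catches exact copies.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a hex digest acting as the content's digital fingerprint.

    Stand-in for perceptual hashing; exact hashing misses edited copies.
    """
    return hashlib.sha256(content).hexdigest()

class SharedHashDatabase:
    """Toy model of a cross-company database of flagged-content hashes."""

    def __init__(self):
        self._known = set()

    def add(self, content: bytes) -> None:
        # One member company flags content; only its hash is shared.
        self._known.add(fingerprint(content))

    def is_known(self, content: bytes) -> bool:
        # Any member can check an upload against the shared hashes.
        return fingerprint(content) in self._known

db = SharedHashDatabase()
db.add(b"flagged-video-bytes")
print(db.is_known(b"flagged-video-bytes"))  # True: exact re-upload caught
print(db.is_known(b"edited-video-bytes"))   # False: exact hashing misses edits
```

The design point is that members exchange fingerprints rather than the content itself, so flagged material never has to be redistributed in order to be blocked.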
In the wake of the horrific Christchurch attacks, we communicated in real time across our companies and were able to stop hundreds of videos of the attack, despite the fact that bad actors were actively editing the video before uploading it to circumvent our systems. We know the adversaries are always evolving their tactics, and we have to improve if we want to stay ahead. And though we will never be perfect, we have made progress, and we are committed to tirelessly combating extremism on our platform. I appreciate the opportunity to be here today. I look forward to answering your questions. Thank you. Mr. Pickles: Chairman Thompson, Ranking Member Rogers, members of the committee, thank you for the opportunity to be here today. We keep the victims... Rep. Thompson: Turn your mic on, please. Can you pull the mic closer? Mr. Pickles: Sorry. Is that better? Thank you. We keep the victims, their families, and the affected communities in Christchurch and around the world in our minds as we undertake this important work. We've made the health of Twitter our top priority, and we measure our efforts by how we encourage healthy conversations and critical thinking on the platform. Conversely, hateful conduct, terrorist content, and deceptive practices detract from the health of the platform. I would like to begin by outlining three key policies. Firstly, Twitter takes a zero-tolerance approach to terrorist content on our platform. Individuals may not engage in terrorism recruitment or promote terrorist acts. Since 2015, we have suspended more than 1.5 million accounts for violations of our rules related to the promotion of terrorism, and we continue to see more than 90 percent of these accounts suspended through proactive measures. In the majority of cases, we take action at the account creation stage, before the account has even tweeted. The remaining 10 percent is caught through a combination of user reports and partnerships. Secondly, we prohibit the use of Twitter by violent extremist groups.
These are defined as groups that, whether by their statements on or off the platform, promote or use violence against civilians to further their cause, whatever their ideology. Since the introduction of this policy in 2017, we have taken action on 184 groups globally and permanently suspended more than 2,000 unique accounts. Thirdly, Twitter does not allow hateful conduct on its service. An individual on Twitter is not permitted to promote violence against, or directly attack or threaten, people based on protected characteristics. Where any of these rules are broken, we will take action to remove the content, and we will permanently remove those who promote terrorism on Twitter. As you have heard, Twitter is a member of the Global Internet Forum to Counter Terrorism, in partnership with Facebook, Google, and Microsoft, sharing technology and information across the industry as well as providing essential support to smaller companies. We learned a number of lessons from the Christchurch attacks. The distribution of media was manifestly different from how other terror organizations have worked. This reflects a change in the wider threat environment that requires a renewed approach and a focus on crisis response. After Christchurch, an array of individuals online sought to continuously re-upload the content created by the attacker, both the video and the manifesto. The broader internet ecosystem presented then, and still presents, a challenge we cannot avoid. A range of third-party services were used to share content, including forums and websites that have long hosted some of the most egregious content available online. Our analysis found that 70 percent of the views of the video posted by the Christchurch attacker came from verified accounts on Twitter, including news organizations and individuals posting the video to condemn the attack. We are committed to learning and improving, but every entity has a part to play.
We should also take some heart from the positive examples we have seen on Twitter around the world, as users come together to challenge hate and division. Hashtags like "Pray for Orlando," "Je Suis Charlie," or, after the Christchurch attacks, "Hello Brother" reject terrorist narratives and offer a better future for us all. In the months since the attack, governments, industry, and civil society have united to commit to a safe, secure, open, and global internet. In fulfilling our commitment to the Christchurch Call, we will take a wide range of actions, including continuing to invest in technology so we can respond as quickly as possible to future incidents. Let me now turn to our approach to dealing with attempts to manipulate the public conversation. As a uniquely open service, Twitter enables the clarification of falsehoods in real time. We proactively enforce policies and use technology to halt the spread of content propagated through manipulative tactics. Our rules clearly prohibit account manipulation, malicious automation, and fake accounts. We continue to explore how we may take further action, through both policy and product, to address these issues in the future. We continue to critically examine additional safeguards we can implement to protect the healthy conversations on Twitter. We look forward to working with the committee on these important issues. Thank you. Rep. Thompson: Thank you for your testimony. I now recognize Mr. Slater to summarize his testimony for five minutes. Mr. Slater: Chairman Thompson, Ranking Member Rogers, and distinguished members of the committee, thank you for the opportunity to appear before you today. I appreciate your leadership on the important issues of radicalization and misinformation online and welcome the opportunity to discuss Google's work in these areas. My name is Derek Slater, and I am the global director of information policy at Google. In my role, I lead a team that works on public policy frameworks for online content.
At Google, we believe that the internet has been a force for creativity, learning, and access to information. Supporting the free flow of ideas is core to our mission: to organize the world's information and make it universally accessible and useful. Yet there have always been legitimate limits, even where laws protect free expression, and this is true both online and off, especially when it comes to issues of terrorism, hate speech, and misinformation. We take these issues seriously and want to be a part of the solution. In my testimony today, I will focus on two areas where we are making progress to protect our users: first, the enforcement of our policies around terrorism and hate speech, and second, combating misinformation broadly. On YouTube, we have rigorous policies and programs to defend against the use of our platform to spread hate or incite violence. Over the past two years, we have invested heavily in machines and people to quickly identify and remove content that violates our policies. First, YouTube's enforcement system starts at the point at which a user uploads a video. If it is similar to videos that violate our policies, it is automatically sent for humans to review. If they determine it violates our policies, they remove it, and the system makes a digital fingerprint so it cannot be uploaded again. In the first quarter of 2019, over 75 percent of the more than 8 million videos removed were first flagged by a machine, the majority of which were removed before a single view was received. Second, we also rely on experts to find videos that the algorithm might be missing. Some of these experts sit at our intelligence desk, which proactively looks for new trends in content that might violate our policies. We also allow expert NGOs and governments to notify us when bad content appears, through our trusted flagger program. Finally, we go beyond enforcing our policies by creating programs to promote counter-speech.
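The upload-screening flow described here can be sketched as a small state machine: an upload is checked against fingerprints of confirmed violations, matches are rejected outright, and suspected content is queued for human review, whose verdict feeds new fingerprints back into the blocklist. Every name below is hypothetical, none of this is YouTube's actual code, and the exact-match hash again stands in for the similarity matching a real system would use.

```python
import hashlib
from collections import deque

blocked_fingerprints = set()   # fingerprints of confirmed violations
review_queue = deque()         # uploads awaiting human review

def fingerprint(video_bytes: bytes) -> str:
    # Stand-in for similarity matching; real systems tolerate edits.
    return hashlib.sha256(video_bytes).hexdigest()

def handle_upload(video_bytes: bytes) -> str:
    """Reject known content at upload time; otherwise queue for review.

    In this simplified sketch every unknown upload is queued; a real
    pipeline would only queue uploads resembling known violations.
    """
    if fingerprint(video_bytes) in blocked_fingerprints:
        return "rejected"
    review_queue.append(video_bytes)
    return "pending review"

def human_review(video_bytes: bytes, violates_policy: bool) -> None:
    # A confirmed violation is fingerprinted so re-uploads are
    # blocked automatically, without another round of review.
    if violates_policy:
        blocked_fingerprints.add(fingerprint(video_bytes))

v = b"suspect-video"
print(handle_upload(v))        # pending review
human_review(v, violates_policy=True)
print(handle_upload(v))        # rejected
```

The feedback loop is the key idea: human judgment is spent once per piece of content, and the fingerprint then lets machines enforce that judgment at upload speed.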
Examples of this work include our Creators for Change program, which supports YouTube creators who are acting as positive role models. In addition, Alphabet's Jigsaw group has deployed the Redirect Method, which uses targeted ads and videos to disrupt online radicalization. This broad, cross-sector work has led to tangible results. In the first quarter of 2019, YouTube manually reviewed over one million suspected terrorist videos and found that fewer than 10 percent, about 90,000, violated our terrorism policy. As a comparison point, we typically remove between 7 million and 9 million videos per quarter, a tiny fraction of a percent of YouTube's total views during this time period. Our efforts do not stop there. We are constantly taking input and reacting to new situations. For example, YouTube recently further updated its hate speech policy, specifically prohibiting videos alleging that a group is superior in order to justify segregation or exclusion based on qualities like age, gender, race, sexual orientation, or veteran status. Similarly, the recent tragic events in Christchurch presented some unprecedented challenges. In response, we took more drastic measures, such as automatically rejecting new uploads of the video without waiting for human review to determine whether it was news content. We are now re-examining our crisis protocols, and we also signed the Christchurch Call to Action. Finally, we are deeply committed to working with government, the tech industry, experts in civil society, and academia to protect our services from being abused by bad actors, including through Google's chairmanship of the GIFCT in the last year and a half. On the topic of combating misinformation, we have natural long-term incentives to prevent anyone from interfering with the integrity of our products. We also recognize that it is critically important to combat misinformation in the context of democratic elections, when our users seek accurate, trusted information that will help them make critical decisions.
We have worked hard to curb misinformation in our products. Our efforts include implementing policies against the monetization of misrepresentative content and employing multiple teams that identify and take action against malicious actors. At the same time, we have to be mindful that our platforms reflect a broad array of sources. There are free speech considerations. There is no silver bullet, and we will continue to work to get it right. In conclusion, we want to do everything we can to ensure users are not exposed to harmful content. We understand these are difficult issues of serious interest to the committee. We take them seriously. We want to be responsible actors and do our part. Thank you for your time. I look forward to taking your questions. Rep. Thompson: Thank you for your testimony. I now recognize Ms. Strossen. Dr. Strossen: Thank you so much, Chairman Thompson, Ranking Member Rogers, and other members of the committee. My name is Nadine Strossen. I am a professor of law at New York Law School and the immediate past president of the American Civil Liberties Union. Last year, I wrote a book pertinent to the topic of this hearing, called "Hate: Why We Should Resist It with Free Speech, Not Censorship." I note, Mr. Chairman, that you referred to hate speech as problematic content, in addition to terror content and misinformation. These kinds of speech present enormous dangers, but so does empowering government or private companies to censor and suppress speech for this reason. The concepts of hate speech, terrorist content, and misinformation are all irreducibly vague and broad, and therefore have to be enforced according to the subjective discretion of the enforcing authority. And that discretion has been exercised in ways that both under-suppress speech that does pose a serious danger, as the chairman and ranking member pointed out, and over-suppress important speech, as has also been pointed out, including speech that counters terrorism and other dangers.
What is worse is that in addition to violating free speech and democracy norms, these measures are not effective in dealing with the underlying problems. I thought that was something pointed out by comments from my co-panelists. In particular, Nick Pickles' written testimony talked about the fact that if somebody is driven off one of these platforms, they will then take refuge in darker corners of the web, where it is much harder to engage with them and use them as sources of information for law enforcement and counterterrorism investigations. So, we should emphasize other approaches that are consistent with free speech and democracy but have been lauded as at least as effective, and perhaps even more so, than suppression. I was very heartened that the written statements of my co-panelists all emphasized these other approaches. Monika Bickert's testimony talked about how essential it is to go after the root causes of terrorism, and the testimony of Nick Pickles and Derek Slater also emphasized the importance of counter-speech, counter-narratives, and redirection. I recognize that every single one of us in this room is completely committed to free speech and democracy, just as every single one of us is committed to countering terrorism and disinformation. After all, the reason we oppose them is because of the harm they do to democracy and liberty. Before I say anything further, I have to stress something I know everybody here knows but many members of the public do not. These social media companies are not bound by the First Amendment free speech guarantee. None of us has a free speech right to air any content on their platforms at all. Conversely, they have their own free speech rights to choose what will and will not be on their platforms. It would be unconstitutional, of course, for Congress to purport to tell them what they must put up and what they must take down, to the extent that the takedowns would go beyond First Amendment-unprotected speech.
And Chairman Thompson, you did completely accurately note that much of the content that is targeted as terrorist is unprotected, but much of it is protected under the Constitution, and much of it is valuable, including human rights advocacy that has been suppressed under these overbroad and subjective standards. Although the social media companies do not have a constitutional obligation to respect free speech, given their enormous power, it is incredibly important that they be encouraged to do so. In closing, I am going to quote a statement from the written testimony of Nick Pickles, which I could not agree with more. He said, "We will not solve the problems by removing content alone. We should not underestimate the power of open conversation to change minds, perspectives, and behaviors." Thank you very much. Rep. Thompson: I thank all the witnesses for their testimony. And I remind each member that he or she will have five minutes to question the panelists. I now recognize myself for questions. Misinformation is part of this committee's challenge as it relates to this hearing, as is terrorist content. Let's take, for instance, the recent doctored video of Speaker Nancy Pelosi that made her appear to be drunk, slurring her words. Facebook and Twitter left up the video, but YouTube took it down. Everybody agreed that something was wrong with it. Facebook, again, took a different approach. So I want Ms. Bickert and Mr. Pickles to explain how you decided on the process of leaving this video up on Facebook and Twitter. And then, Mr. Slater, I want you to explain to me why YouTube decided to take it down. Ms. Bickert? Ms. Bickert: Thank you, Mr. Chairman. Let me first say misinformation is a top concern, especially as we are getting ready for the 2020 elections. We know this is something we have to get right, and we are especially focused on what we should be doing with increasingly sophisticated manipulated media.
Let me first speak to our general approach to misinformation, which is that we remove content when it violates our Community Standards. Beyond that, if we see someone sharing misinformation, we want to make sure we are reducing its distribution and also providing accurate information from independent fact-checking organizations, so that people can put what they see in context. To do that, we work with 45 independent fact-checking organizations from around the world, each of which is certified by Poynter as being independent and meeting certain principles. As soon as we find something on our platform that a fact-checking organization rates false, we reduce its distribution and put related articles next to it, so that anyone who shares it gets a warning that it has been rated false. Everybody who shared it before we got the fact-checkers' rating gets a notification that the content has been rated false by a fact-checker, and we put next to it related articles from the fact-checking organization. Rep. Thompson: How long did it take you to do that for the Pelosi video? Ms. Bickert: The video was uploaded to Facebook on Wednesday, May 22, late morning, and on Thursday around 6:30 p.m., a fact-checking organization rated it as false, and we immediately downranked it and labeled it misinformation. That is something where we think we need to get faster. We need to make sure that we are getting this information to people as soon as we can. It is also a reason that at 6:30 p.m... Rep. Thompson: So it took you about a day and a half. Ms. Bickert: Yes, Mr. Chairman. Rep. Thompson: Mr. Pickles. Mr. Pickles: As Monika said, in this process we review the content against our rules; any content that breaks our rules, we will remove. We are also very aware that people use manipulative tactics to spread this content, so we take action on both the distribution and the content.
This is a policy area we're looking at right now, not just in cases where videos might be manipulated, but also where videos are fabricated and the process of creating the media may be artificial. The best way to approach this is with a policy and product approach that covers, in some cases, removing it. Rep. Thompson: I understand. Get to why you left it up. Mr. Pickles: The video did not break the rules and the account did not break the rules, but it is a policy area we're looking at now, whether this is the correct framework for dealing with this challenge. Rep. Thompson: So if it is false or misinformation, that does not break your rules. Mr. Pickles: Not at the present time. Rep. Thompson: Thank you. Mr. Slater. Mr. Slater: We have tough Community Guidelines that lay out the rules of the road, and when content that breaks them is identified by machines or users, we will review and remove it. The video violated our policies around deceptive practices, and we removed it. Rep. Thompson: Our committee is tasked with looking at misinformation and other things; we are not trying to regulate companies. Terrorist content can be a mislabeled document. Ms. Strossen, talk to us about your position on that. Dr. Strossen: The difficulty and the inherent subjectivity of these concepts is illustrated by the fact that we have three companies that have subscribed to essentially the same general commitments, and yet are interpreting the details very differently with respect to specific content. We see that over and over again. Ultimately, the only protection that we are going to have in this society against disinformation is through training and education, starting at the earliest levels of a child's education, in media literacy. Congress could never punish misinformation in traditional media, right? Unless it meets the very strict standards of defamation or fraud, which are punishable. Other content, including the Pelosi video, is completely constitutionally protected. Rep.
Thompson: Thank you. I yield to the ranking member for his questions. Rep. Rogers: Thank you. Mr. Slater, the video I referenced, have you seen it? Mr. Slater: I have not seen the full video, but I am aware of what you're talking about. Rep. Rogers: Would you like to respond to the comments I offered about what was said? Mr. Slater: Could you be specific? What would you like me to respond to? Rep. Rogers: When she basically said we cannot let Google be broken up, because the smaller companies will not have the same resources we have to stop Trump from getting re-elected. Mr. Slater: Thank you for the clarification. Let me be clear. This was recorded without consent, and I believe these statements were taken out of context. As to the issue you are talking about: no employee, from the lower ranks up to senior executives, has the ability to manipulate our search results or our products and services based on their political ideology. We design and develop our products for everyone, and we mean everyone. And we do that to provide relevant, authoritative results. We are in the trust business. We have a long-term incentive to get that right, and we do that in a transparent fashion. You can read more on our website. We have guidelines, public on the web, that describe how we look at ratings. And we have robust systems and checks and balances in place to make sure those are rigorously adhered to as we set up our systems. Rep. Rogers: OK, I recognize that she was being videotaped without her knowledge. But the statements I quoted were full, complete statements that were not edited. It is concerning when you see somebody who is an executive at Google, and there was more than one in the video, by the way, making statements that indicate it is management's policy within Google to try to manipulate information to cause one candidate or another for president of the United States, or, for that matter, any other office, to be successful or not.
That is what gave rise to my concern. Do we have reason to be concerned that there is a pervasive culture at Google to push one political party over another in the way it conducts its business? Mr. Slater: Congressman, I appreciate the concern, but let me be clear again about what our policy is, from the highest level on down, what our practice has been, and what our structures and checks and balances are about. We do not allow anyone, lower level or higher level, to manipulate products in that way. Rep. Rogers: I hope it is not the culture at any of your platforms, because you are very powerful in our country. Ms. Strossen, you raised concerns in your testimony that while social media companies can legally decide what content is allowed on their platforms, there are risks in doing so. What are your recommendations to these companies regarding content moderation without censorship? Dr. Strossen: Thank you, Ranking Member Rogers. I would, first of all, endorse at least the transparency that both you and Chairman Thompson stressed in your opening remarks, and in addition, other process-related guarantees, such as due process, the right to appeal, and a clear statement of standards. I would also recommend standards that respect the free-speech guarantees, not only of the United States Constitution, but of international human rights, which the United Nations Human Rights Council has recommended in a nonbinding way that powerful companies adopt. And that would mean that content could not be suppressed unless it posed an emergency: that it directly caused certain specific, serious, imminent harm that cannot be prevented other than through suppression. Short of that, as you indicated, Ranking Member Rogers, politically controversial, even repugnant speech should be protected.
We may very much disagree with the message, but the most effective as well as principled way to oppose it is through more speech. I would certainly recommend, as I did in my written testimony, that these companies adopt user-empowering technology that would allow us to make truly informed, voluntary decisions about what we see and what we don't see, and not manipulate us, as has been reported many times, into ever-deeper rabbit holes and echo chambers, but give us the opportunity to make our own choices and choose our own communities. Rep. Rogers: Thank you. I yield back. Rep. Thompson: Thank you. The chair recognizes the gentlelady from Texas, Ms. Sheila Jackson Lee. Rep. Jackson Lee: I thank the chairman and ranking member. Let me indicate there is, known to the public, the fourth estate. And I might say we have a fifth estate, which is all of you, and others, that represent the social media empire. And I believe it is important that we work together to find the right pathway for how America will be a leader, and how we balance the responsibilities and rights of such a giant entity with the rights and privileges of the American people and the sanctity and security of the American people. Social media statistics from 2019 show there are 3.2 billion social media users worldwide, and this number is only growing. That equates to about 42% of the current world population. That is enormous. I know the numbers are just as daunting in the United States. So let me ask a few questions, and I would appreciate brevity because of the necessity to try to get as much in as possible. On March 15, 2019, worshipers were slaughtered in the midst of their prayers in Christchurch, New Zealand. The gunman livestreamed the first attack on Facebook Live. So my question to you, Ms. Bickert, is: can you today assure the committee that there will never be another attack of this nature that will be streamed as it is happening over Facebook Live? You mentioned 30,000.
And so, I hope they may contribute to your answer. I yield to you for your answer. Ms. Bickert: Congresswoman, thank you. The video was appalling. The attack, of course, is an unspeakable tragedy, and we want to make sure we are doing everything to make sure it does not happen again and is not livestreamed again. One of the things we have done is change access to Facebook Live, so that people who have had a serious content policy violation are restricted from using it. So the person who livestreamed the New Zealand attack... Rep. Jackson Lee: What is the likelihood of you being able to commit that that will not happen again, in terms of the new structures you have put in place? Ms. Bickert: Well, we are working to develop the technology, and the technology is not perfect. Artificial intelligence is a key component in us recognizing videos before they are reported to us. This video, about 250 people saw it while it was live on Facebook. Nobody reported it. Rep. Jackson Lee: My time is short. Do you have a percentage, 50%, 60%? Ms. Bickert: With the technology, I cannot give a percentage. I can say we are working with governments and others to improve that technology so that we will be able to better recognize these videos. Rep. Jackson Lee: Mr. Pickles and Mr. Slater, if you would — Ms. Bickert raised the question about artificial intelligence. If you would respond about the utilization of AI and individuals, as briefly as possible, please. Mr. Pickles: One of the challenges Twitter has is there is not a lot of content, and one of the challenges in Christchurch was we did not see the same video uploaded; we saw different snippets of different lengths. We are investing in technology to make sure people cannot re-upload content that has been removed. We are making changes so that where people manipulate media, we can move quicker. Rep. Jackson Lee: Using human subjects and AI? Mr. Pickles: It is machine learning and humans, yes. Rep. Jackson Lee: Mr. Slater. Mr.
Slater: Thank you, congresswoman. It is a combination of machine learning and people who review. Speaking overall, in the first quarter of 2019, 75% of the 8 million videos we removed were first flagged by a machine, and the majority were removed before a single view. When it comes to violent extremism, it is even stronger: over 90% of the violent extremist videos uploaded in the past six months were removed before a single human flag, and 88% with fewer than 10 views. Rep. Jackson Lee: Thank you. Let me ask about deepfakes. In the 2020 election, what will you do to recognize the fact that deepfakes can be a distortion of an election, which is really the premise of our democracy? Can you quickly answer that question? At the same time, I want to make mention of the fact that free speech does not allow incitement, fighting words, threats, and otherwise. Could you answer that? Briefly. Ms. Bickert: Yes. Rep. Jackson Lee: The deepfakes, as briefly as you can. Ms. Bickert: We are working with experts outside the company and others to make sure we understand how deepfakes can be used, and to come up with a comprehensive policy to address them. In the meantime, we are focused on removing fake accounts, which are disproportionately responsible for this type of content, and on improving the speed at which we counter misinformation with actual factual articles and reduce distribution. Rep. Jackson Lee: Mr. Pickles. Mr. Pickles: We are working on a product and policy solution. One of the things we have in place is that if anyone presents any misinformation about how to vote that lends itself to voter suppression, we will remove that now. That policy has been in place for some time. Rep. Jackson Lee: Mr. Slater. Mr. Slater: We are investing in working with researchers and others to build capacity in this space. We have an intel desk that scans the horizon for new threats. We are looking at this sort of issue. Rep. Jackson Lee: Thank you for your courtesy.
I yield back, Mr. Chairman. Rep. Thompson: The chair recognizes the gentleman from North Carolina. Rep. Walker: While we were sitting here today, I looked up on the internet "Facebook and Twitter apologizes." There were more pages than I can count going through the apologies. Mr. Pickles and Mr. Slater, one of you used "hateful content" — Mr. Pickles — and Mr. Slater, you used the expression "hate speech." What I did not hear you say in the groups of people that you listed were those who were wanting to express their faith. In April, one of the larger apologies you have made brought to our attention Abby Johnson's life story in a movie called Unplanned. That has gone on to make $20 million at the box office, but Google listed it as propaganda. Was it Google that listed it, or an individual? Mr. Slater: Congressman, I am not familiar with the video in question. Rep. Walker: It is not a video, it is a movie — a larger story in April of this year, a major motion picture. Did it come across your radar? Mr. Slater: I am not familiar with that specific video. Rep. Walker: When we talk about the difference between hateful content and hate speech — Mr. Pickles, in June of this year, Marco Rubio raised concerns about platforms banning any kind of language that was offensive to China. How does Twitter use its discretion to block information without discriminating against different individuals or groups? Mr. Pickles: First of all, our rules identify hateful conduct, so we focus on behavior first: how did two accounts interact? There are offensive views on Twitter and views that people will disagree with strongly on Twitter. The difference between that and targeting somebody else is the difference between content and conduct. Our rules do not have ideology in them. They are enforced without ideology, and where we make mistakes, it is important for us to recognize them. One of the challenges we have: when we remove someone from Twitter and they come back for a different purpose, our technology will recognize that person trying to come back to Twitter.
We do not want people we have removed to come back to the platform. Sometimes that does catch people who have a different purpose. There is value to the technology, but we should recognize where we make a mistake. Rep. Walker: How do you enforce your policies to make sure they are being followed and not enforced with bias? Mr. Slater: We have a robust system of development and enforcement of our policies. We are constantly reviewing and analyzing the policies themselves to understand whether they are fit for purpose or drawing the right lines. Reviewers go through extensive training to make sure we have a consistent approach. We draw those reviewers from around the country and around the world, and we train them very deeply. Rep. Walker: I want to keep moving. What kind of training do you provide, if any, for your human content moderators in avoiding subjectivity and bias? Mr. Slater: We provide robust training to make sure we are applying consistent rules. Rep. Walker: What is robust training? Mr. Slater: When reviewers are brought on board, before they are allowed to review, we provide them with a set of educational materials and detailed steps, and in addition they are reviewed by managers and others to make sure they can correct mistakes and learn from those mistakes, and so on. Rep. Walker: Do you think AI will ever get to the point where you can rely solely on it to moderate content, or do you think human moderation will always play a role? Mr. Slater: Thank you for the question, congressman. For the near future, human moderation is important for this. Automation is good for matching known images of terror propaganda or child sexual abuse; it is not good at making calls around things like hate speech or bullying. Rep. Walker: A couple of questions. Mr. Pickles, do you have any idea how many times Twitter apologizes per month for missing content? Mr. Pickles: We take action on appeals regularly. Rep. Walker: Do you have a number on hand? Mr. Pickles: Not on hand, but I can follow up. Rep. Walker: Do you know how many times Google apologizes for mismanaging content a month? Mr. Slater: I do not, but I will come back to you with that number.
Rep. Walker: I think you have apologized more than Kanye West has to Taylor Swift. I yield back. Rep. Thompson: The chair recognizes the gentlelady from Illinois, Ms. Underwood. Rep. Underwood: In March, two weeks after the Christchurch terror attack, Facebook announced it would direct users searching for white supremacist terms to Life After Hate. Life After Hate is based in Chicago, so I met with them last month in Illinois. They told me that since Facebook's announcement, they have seen a large bump in activity that has not slowed down. Facebook and Instagram have 3 billion users combined. Life After Hate is a tiny organization whose federal funding was pulled by the administration. They do great work, but they do not have the resources to handle every single neo-Nazi on the internet alone. Would you consider donating to Life After Hate for this partnership? Ms. Bickert: Life After Hate is doing great work with us, and we are redirecting people searching for these terms to this content. We did this in some other areas as well, particularly with self-harm support groups, and we do see sometimes that they are under-resourced. This is something we will come back to you on, but we are committed to making sure this works. Rep. Underwood: So there is no long-term funding commitment, but you will consider it? Ms. Bickert: I'm not sure what the details are, but I will follow up. Rep. Underwood: We really would appreciate that follow-up for exact information. Over the years, YouTube has put forward various policy changes in an attempt to limit how easily dangerous conspiracy videos spread. For example, YouTube announced it would display information cues in the form of links to Wikipedia. In the 15 months since the policy was announced, what percentage of viewers have clicked on the link for more information? Mr. Slater: Thank you for the question. This is a very important issue. We display these contextual cues. I do not have a percentage on how many have clicked through, but I will get back to you. Rep. Underwood: Most Wikipedia articles can be edited by anyone on the internet.
Does YouTube vet the articles to ensure their accuracy, or do you work with Wikipedia to make sure the articles are locked against malicious edits? Mr. Slater: We work to raise authoritative information, ensure what we are displaying is trustworthy, and correct any mistakes that we may make. Rep. Underwood: Have you corrected Wikipedia pages when the information is incorrect? Mr. Slater: No, I am sorry. Before we display such things, we ensure we have a robust policy to make sure we are displaying accurate information. Rep. Underwood: The question is about what you are linking to. Mr. Slater: Yes. Rep. Underwood: Can you follow up with us in writing on that one? YouTube has displayed links to additional reporting that contains misinformation. What percentage of viewers click through? Mr. Slater: I do not have that information, but I will follow up in writing quickly. Rep. Underwood: I would like the clerk to display the screenshots my staff provided earlier. Last month, Instagram announced it would hide search results for hashtags related to misinformation. I did a search yesterday, and these are the results. The majority of these results display anti-vax hashtags. These are not niche terms, and the content is not hard to find. Vaccine misinformation is not a new issue. Anti-vax content is a deadly threat to public health. What additional steps can Instagram commit to taking to make sure this content is not promoted? Ms. Bickert: Vaccine hoaxes and misinformation are top of mind for us. We have launched some recent measures, but I want to tell you how we are getting better on those. One thing: when accounts are sharing this information, we are trying to limit them in the search results as well. That is something that requires manual review, to make sure we are doing that right. Another thing is actually surfacing educational content, and we are working with major health organizations on that. If people search for this, they will see it; we are working with the health organizations right now, and we should have that content up and running soon. I can follow through on the details of that.
Rep. Underwood: It is critically important that the new information is shared with users at the time that they search for it, which we know is ongoing. Everyone in this room appreciates that online extremism and misinformation are problems, but these are not new challenges, and failing to respond seriously to them is dangerous. Social media helps extremists find each other, helps make their opinions more extreme, and hurts our communities. We want strong policies from all of you to keep us safe, and I believe your policies are well-intentioned, but there is a lot more that needs to be done. Frankly, some of it could have been done already. Thank you, and I yield back. Rep. Thompson: Thank you. The chair recognizes the gentleman from New York for five minutes. Rep.: Thank you for being here today. It is obvious from this conversation that this is a very difficult area to maneuver in. I understand your concerns about First Amendment infringement, and I applaud the companies trying to find that delicate balance. Since you are not a government entity, you have better flexibility in how you do that. It is up to you to use that flexibility to do the best job you possibly can. I will get back to you in a minute. Mr. Slater, to make sure I am perfectly clear on what you are saying here: I am well aware from your previous testimony what the policies and practices are at Google. The video Mr. Rogers referenced shows people talking about a very serious political bias and their intent to implement that bias in their job. Whether or not that happened, I don't know. I am not asking about the policies and practices. I am asking if you have ever been made aware of someone who has done that, used political bias at Google to alter content. First of all, have you ever heard that? I want to know if you have heard that. Mr. Slater: I am not aware of any such situation, and we have checks and balances that would prevent that. Rep.: You have not heard of that ever in your time at Google? Mr. Slater: Correct. Rep.: OK.
Rep.: And the allegation that Congressman Walker referenced about the abortion movie, you have not heard anything about the content in that respect as well? Mr. Slater: I am not aware of that video, no. Rep.: And you have not heard of anyone — Mr. Slater: We would remove it where it violates our policies. Rep.: But have you ever heard of the difference — not what your policies and practices are, but what you are personally aware of? Mr. Slater: I personally understand, and I am not aware of a situation like that. Rep.: Thank you. I want to talk to all of you here today about this GIFCT — the lamest acronym ever, by the way. Can Mr. Pickles give me a little detail about what the goal of this forum is? Mr. Pickles: I think the critical thing is that it is about bringing together companies who have investment in countering terrorism, while recognizing the challenge is far bigger. We need to support small companies, fund research so we have a research network, and finally, share technical tools. You have heard people reference these digital fingerprints; whether it is a fingerprint or, in Twitter's case, the URL, we share it. If we take down an account spreading a terrorist manual and it is linked to a company, we will tell the company: here, the terrorist account is linked to something on your server, check it out. Rep.: Like working in the malware arena, correct? Mr. Pickles: Yes. Rep.: What companies are members of this? Is it a bunch or a limited number? Mr. Pickles: YouTube, Twitter, Microsoft, and Facebook. Dropbox has now joined. We have a partnership with Tech Against Terrorism, which allows all companies to go through a training process. They learn how to write their terms of service and enforce their terms of service, with mentoring. That is where we are hopeful that we will have more companies joining and growing this, but the sharing consortium has many members — 15 members. We share URLs with 13 companies. It is broad, but we want it to have a high standard. We want to keep a high bar and bring people in.
Rep.: As far as the encrypted messaging platforms, I take it they are not all participants in this, are they? Mr. Pickles: I am not the best person to ask, honestly. Rep.: Would you know, Ms. Bickert? Ms. Bickert: Thank you. The main members are those five companies. Among the smaller companies who have been trained, that does include some of the encrypted messaging services. Some of this is about understanding the right lines to draw and how to work with law enforcement authorities, which encrypted communication services can definitely do. Rep.: My biggest concern is this: the big players in this field seem to be endeavoring to do the right thing, especially when it comes to counterterrorism. Encrypted messaging platforms, by and large, are a broader field to play in, and there does not seem to be much we can do to stop their content from inciting violence. I know my time is up, but perhaps in writing, I would love to know how to entice some of them to be part of this effort. Encryption is obviously a breeding ground for white supremacists and violence of all sorts, and I'm trying to get the companies to be more responsible. I'm not sure what the bottom-line profit incentive would be, but I would love to hear from you. I yield back. Rep. Thompson: The chair recognizes the gentlelady from Michigan for five minutes. Rep.: Good morning, thank you for being here. I want to switch gears and talk about the influence and the spread of foreign-based information, foreign-based political ads in particular, in our political process. Some of us read the Mueller Report page by page, and I was struck that the Facebook general counsel stated for the record that for the low, low price of $100,000, the Russian-associated Internet Research Agency got to 126 million American eyeballs. I'm interested in this because the political ads that they put forward were specifically targeted to swing states, and Michigan is one of those states. We saw an overabundance of these ads.
They were specifically paid for by foreign entities, and they were advocating for or against a candidate in our political process. I have a serious problem with that. Separate from the issues of speech and what an American does or does not have the right to say, can you speak specifically to Facebook's reaction to the fact that a foreign entity purchased this influence — it does not matter to me that it was Russian; it could be Chinese or Iranian — and what steps you have taken since 2016 to prevent the spread of foreign information? Ms. Bickert: Absolutely, thank you for the question. Compared to where we were in 2016, we are in a much, much better place. Let me share with you some of the steps we have taken. First of all, all of those ads came from fake accounts, and we have a policy against fake accounts, and we have gotten better at enforcing it. We are now stopping more than one million fake accounts per day at the time of upload. We publish stats on the fake accounts we are removing every quarter, and you can see how much better we have gotten in the past two years. Another thing we are doing is requiring unprecedented levels of transparency. Now, if you want to run a political or political-issue ad in the United States, you have to first verify your identity and show you are an American. Because we have seen fake IDs uploaded by advertisers, we send you something through the mail; you then get a code and upload the government ID for us. We verify that you are a real American, we put a "paid for" disclaimer on the political ad, and we put it in a library we created that is visible to everybody, so even if you do not have a Facebook account you can see this library, and you can search what types of ads are appearing, who is paying for them, how they are being targeted, and so forth. Rep.: That is good to hear. I would love to see those reports and be directed to them so I can see them.
Rep.: For the others at the table, can you talk about your specific policy and brief us on the spread of foreign political ads for or against a candidate running for office in the United States? Mr. Pickles: The first thing we did was to ban Russia Today and all its associated entities from using our products going forward. We took all of the revenue from Russia Today and associated entities and are funding research and partnerships with organizations like the Atlantic Council to better research how we can protect against this. We took the unprecedented step of publishing every tweet — not just the paid-for ones, every tweet — that was produced by a foreign influence operation, in a public archive. You can access more than a terabyte of videos and photographs in a public archive that includes operations from Russia, Venezuela, Iran, and other countries. Mr. Slater: Thank you for the question. Looking backward at 2016, we found limited improper activity on our platforms; that is a product of our Threat Analysis Group and our other tools to root out that sort of behavior. Looking forward, we continue to invest in that, as well as in our election transparency efforts: requiring verification of advertisers for federal candidates, disclosure in the ads, and a transparency report. Rep.: What about the spread of information through bots? What kind of disclosure requirements are there, so that when someone is receiving or viewing something, they have some way of knowing who produced it, who is spreading it, whether it is a human being or a machine? Why don't we start with Facebook? Ms. Bickert: One of our policies is that you have to use your real name and be using an account authentically. When we are removing bot accounts, we are removing them for being fake accounts. Those are numbers that we publish. Mr. Pickles: Every week we challenge between 8 million and 10 million accounts for breaking our rules on suspicious activity, including automation, and many of those challenged accounts are removed every week.
Mr. Slater: We have a strict policy about misrepresentation in ads and misinformation. We are working with our Threat Analysis Group on coordinated behavior and are taking action when appropriate. Rep.: My time has expired. Thank you. Rep. Thompson: The chair recognizes the gentleman from Louisiana for five minutes. Rep. Higgins: Mr. Slater, are you ready? [laughter] Get your scripted answers ready, sir. Google and YouTube are developing quite a poor reputation in our nation: a clear history of repetitively silencing and banning voices that do not serve a more liberal agenda. But that does not concern me right now; we are talking about freedom of speech and access to open communications. We are here today to discuss extremist content, violent threats, terrorist recruiting tactics, and the instigation of violence. Yet the same justification your platform uses to quell true extremism is often used to silence and restrict the voices that you disagree with. We don't like it. Prager University, which produces a series of five-minute videos discussing political issues, religion, and economic topics from conservative perspectives, has had over 50 of its videos restricted. Some of those restricted videos include "Why America Must Lead." I think that is a question that should be directed to America, because of our stance for freedom. We are all voices to be heard. "The Ten Commandments: Do Not Murder" — that video was pulled by your people. What is wrong with the Ten Commandments, might I ask? "Why Did America Fight the Korean War?" — a significant reflection on the history of our nation, pulled. Additionally, YouTube removed the video from Project Veritas, which shows a senior Google executive acknowledging politically motivated search manipulation with an intent to influence election outcomes. None of us here want that, on either side of this aisle. I do not know a man or woman present who is not a true patriot and does not love their country. Various ideological perspectives, yes, but we love our country and will stand for freedom.
A frequent reason provided by YouTube is that the content in question harms the broader community. What could be more harmful than the restriction of our free speech and open communications, regardless of our ideological stance? Please define for America what you mean by "harm to the broader community" as it is used to justify restricting content on Google or YouTube. Point out whether "harm" is limited to physical threats or incitement of violence, or whether it is a convenient justification to restrict content that you deem needs to be restricted. Please explain to America how you determine what is harmful to the community. Let's have your scripted answer. Mr. Slater: Thank you for the question. I appreciate the concern and the desire to foster a robust debate. We want YouTube to be a place where everyone can share their voice and get a view of the world. Rep. Higgins: But you don't allow everyone to share their voice. I have given examples in my brief time. Thank you, Mr. Chairman, for recognizing my time. The First Amendment protects the right to express viewpoints online. If something offends an individual, or is something an individual agrees with, does that mean it meets your company's definition of extreme? Mr. Slater: We have our community guidelines that lay out what is and is not allowed on the platform. If you can clarify what you are asking about specifically, I will be happy to try and answer. Rep. Higgins: Mr. Slater, God bless you, sir. Google is in a bind. Today, America is watching. Today, America is taking a step back. We are looking at the services, we are looking at the platforms we use, and we are finding, to our horror, that they can't be trusted. Today, America is looking carefully at Google, and one word reverberates through the minds of Americans: freedom. Shall it be protected and preserved, or shall it be persecuted at the will and whim of a massive tech company? Mr. Chairman, thank you for recognizing my time. I yield the balance. Thank you for holding this hearing today.
Rep. Thompson: The chair recognizes the gentlelady from New York for five minutes. Rep.: Thank you very much, and I thank our panel for appearing before us today. I want to go into the issue of deepfakes, because I recently introduced legislation, the first House bill to regulate these technologies. If my bill passes, what it would do is make sure that deepfake videos include a prominent, unambiguous disclosure, as well as a digital watermark that cannot be removed. The question I have is, on your platforms, when it comes to your attention that a video has been substantially altered or entirely fabricated, how does your company decide whether to do nothing, label it, or remove it? That's for the panel. Ms. Bickert: Thank you for the question. When it comes to deepfakes, this is a real top priority for us, especially because of the coming elections. Right now, our approach is that we try to use third-party fact-checking organizations; there are 35 of them worldwide. If they rate something as false, they can tell us it has been manipulated. At that point, we will put that information next to it, like the label approach. This is a way of letting people understand that this is something that is, in fact, false. We also reduce distribution of it. We are also looking to see if something should be done specifically in the area of deepfakes. We want to have a comprehensive solution, and that means we need a comprehensive definition of what a deepfake is. Rep.: My bill would require that there is a digital watermark, similar to how your companies do a hash of terrorist content. If there were a central database of deceptive deepfake hashes, could you utilize that? Mr. Pickles: I'm happy to pick up on the previous question. I was at a conference in London a few weeks ago hosted by an organization called Digital Witness. They work on issues around verifying media for all sorts of purposes, including war crimes. It goes across a whole spectrum of content: edited, synthetic, simulated. Every partnership is one we want to explore to make sure we have the information.
I think your framing — that in some circumstances there may be reason to remove content, and in other circumstances it is about providing context to the user and giving them more information — is the best balance for making sure we have the tools available to us, and that is the approach we are developing now. Rep.: Time is not your friend here. We are trying to find something universal that creates transparency, respects the First Amendment, but makes sure it is something that, as Americans whose eyes are constantly on video, you can identify right away. If you have to go through all of these sources to determine it, and each platform has a different way of indicating it, it almost nullifies that. I wanted to put that on your radar, because I think there needs to be some sort of universal way in which Americans can detect immediately that what they are seeing has been altered in some form or fashion. That is what my bill seeks to do. Say Russia released a fake video before the 2020 election of a candidate accepting a bribe or committing a crime. If your companies learned of the deepfake video being promoted by a foreign government to influence our election, would you commit to removing it? How would you handle such a scenario? Have you thought about it? Give us your thoughts. I don't have a lot of time. Ms. Bickert: Congresswoman, we have a real-name requirement on Facebook and various transparency requirements we enforce. If it is shared by someone violating our transparency requirements, we would simply remove it. Mr. Pickles: We have a clear policy on affiliated behavior — activity affiliated with an entity we have already removed. We have removed millions of tweets connected to the Internet Research Agency and any activity affiliated with that organization. Mr. Slater: This is a critical issue, and we want to discuss our overall policies as we would with any sort of foreign interference. Rep.: Mr. Chairman, I will refrain from talking more about this. We have to get to that sweet spot, and we are not there.
The Chair recognizes the gentlelady from Arizona for five minutes. Years ago, the required reading I had was the book 1984. This committee hearing is scaring the heck out of me. I have to tell you, it really is, because here we are, talking about how if somebody googles vaccines, the answer is, oh, we are going to put above what the person is looking for what we think is best. Who are the people judging what's best, what's accurate? This is really scary stuff and really goes to the heart of our First Amendment rights. I don't always agree with the ACLU, and you are the past president of the ACLU, but I am with you wholly on this. We have to be careful, my colleagues, on this. What you deem as inaccurate, I do not deem as inaccurate, or other people may not deem. We had a previous briefing on this issue, and one of the members said, well, I think the President's tweets incite terrorism. Are we going to ban what President Trump says because someone thinks it incites terrorism? This is scary stuff, and I'm glad I started on this, because we need more standing up for our rights, whether it is what you believe or what I believe. I have a specific question for Mr. Slater. In this Project Veritas video that I watched last night, they allege there are internal Google documents, which they put on the screen. It said, for example: imagine that a Google image query for CEO shows predominantly men. Even if it were a factually accurate representation of the world, it would be algorithmic unfairness. In some cases, it may be appropriate to take no action if the system accurately reflects current reality, while in other cases it may be desirable to consider how we might help society reach a more equitable state via product intervention. What does that mean, Mr. Slater? Thank you, congresswoman, for the question. I'm not familiar with the specific slide, but when we are designing our products, we are designing for everyone.
We have a robust set of guidelines to make sure we are providing relevant, trustworthy information. We work with a set of raters around the world and around the country to make sure the guidelines are followed. They are transparent and available for you to read on the web. Well, personally I don't think that answered the question at all, but let me go to the next one. Mr. Clay Higgins asked you about a specific example, and so, Mr. Slater, he was talking about Prager University. I used Google to look up Prager University. On the website, it said conservative ideas are under attack; YouTube does not want young people to hear conservative ideas; over 10% of our entire library is under restricted mode. Why are you putting Prager University videos about liberty and those types of things on restricted mode? Thank you, congresswoman. I appreciate the information. To my knowledge, Prager is a huge success story on YouTube, with millions of views, subscribers, and so on to this day. There is a mode viewers can choose to use called restricted mode, where they might restrict the sorts of videos they see. That applies to many videos across the board, across political viewpoints; it applies also to The Daily Show and other shows as well. It has been applied to a small percentage of the videos at Prager University, which is a huge success story with a huge audience on YouTube. Mr. Pickles, regarding Twitter: President Trump has on multiple occasions accused Twitter of deleting people from his followers. This happened to my husband. He followed Donald Trump, and all of a sudden he was gone. Can you explain that? What is happening there? I tell you, a lot of conservatives really think there is some conspiracy going on here. I said we would look into the case to make sure there was not an issue there. I can say President Trump is the most followed head of state anywhere in the world, and the most talked-about politician anywhere in the world on Twitter.
Although he did lose some followers when we recently undertook an exercise to clean up compromised accounts, President Obama lost far more followers in the same exercise. I think people can look at the way people are seeing President Trump's tweets widely and be reassured the issues are not representative of Twitter's approach. Mr. Chairman, I ran out of time, but if we have another round I want to hear Ms. Strossen's views. She has not had a lot of time to speak. The Chair recognizes the gentlelady from California for five minutes. Thank you very much, Mr. Chairman. I want to talk a little bit about your relationship with civil society groups that represent communities targeted by terrorist content, including white supremacist content, and I am specifically referring to content that targets religious minorities, ethnic minorities, immigrants, LGBTQ people, and others. Can you help by describing your engagement with civil society groups in the U.S. to understand the issues of such content and develop standards for combating this content? Thank you for the question, congresswoman. Anytime we are evolving our policies, we are reaching out to civil society groups, not just in the U.S. but around the world. I have a team called stakeholder engagement; that is what they do. If we are looking at our hate speech policies, one of their jobs is to make sure we are talking to people across the spectrum. Different groups that might be affected by the change, people who have different opinions: all of those people are brought into the conversation. We have teams around the world who are speaking to civil society groups every day. Something else we are doing is important because Twitter is a unique public platform for public conversation: when people challenge hatred and offer a counter-narrative or positive narrative, their views can be seen all over the world.
After Christchurch, "Hello, brother," or "Salaam"; or the man in Kenya who challenged terrorists trying to separate out Christians. We talked about our policies, but also how people can use our platform to reach people with more positive messages. I want to make sure you incorporate this: one of my concerns is that the onus to report the hateful content is placed on the communities that are targeted by the hateful content. That can make social media platforms hostile places for people in targeted communities. Can you tell me what your companies are doing to alleviate this burden, Mr. Slater? And I would like to hear from the other two of you on that. Speaking of how we enforce our Community Guidelines, including on hate speech, we have updated our policies addressing superiority, discrimination, and so on. We use a combination of machines and people. Machines scan broadly and compare against previously violative content. We take our responsibility very seriously: the responsibility to detect that sort of content and review it before it has been flagged. We also rely on flags from users as well as flags from trusted flaggers, civil society groups, and other experts that we work with very closely, both on policies and in flagging those sorts of videos. This is something we have said previously placed too much of a burden on victims. Previously, 20% of the abuse we removed was found proactively, but now it is 40%, and we are continuing to invest to raise that number further. Can the three of you provide an example where you had community engagement, and because of that feedback, there was a policy change that you made? I would like to share a slightly different example: how feedback helped us write a better policy to prevent a problem. When we were crafting our policy on non-consensual intimate imagery, groups in various countries were asking, do you have a policy not just on media shared by an ex-partner, but on creep shots? So from the beginning the policy was written broadly enough to not only capture the original problem, but all the different issues.
The second question that you asked, about putting the burden on the victims: we have invested a lot in artificial intelligence, so there are certain areas where artificial intelligence has really helped us and areas where it is in its infancy. In hate speech, we have gone from zero proactive detection to now, in the first quarter of this year, finding the majority of the content we are removing using artificial intelligence and other technologies. There is a long way to go, because all of those posts have to be reviewed by real people who can understand the context. Where engagement has led to concrete changes, one thing I would point to is the use of hate speech in imagery. The way that we originally had our policies on hate speech, they were really focused on what people were saying in text. It was only through working with civil society partners that we were able to see how we needed to refine those policies to cover images too. And because a lot of groups told us it was hard to know exactly how we define hate speech and where we drew the line, that was a contributing factor when, a couple of years ago, we published a detailed version of our Community Standards, where people can see exactly how we define hate speech. The Chair recognizes the gentleman from Texas for five minutes, Mr. Crenshaw. Thank you, Mr. Chairman, and thank you for the thoughtful discussion on how you combat terrorism online. There are worthy debates to be had there. There are good questions on whether some of this content provides education, so we know about the bad things out there, or whether it is radicalizing people. Those are difficult questions, and I do not know if we will answer them today. But the policies that your social media companies follow do not stop there. They do not stop with terrorist propaganda, which is unfortunately exactly what we are talking about. It goes further than that.
It goes down the slippery slope of what speech is appropriate for your platform and the standards you employ to decide what is appropriate. This is especially concerning, given the recent news and the leaked emails from Google. They show that labeling mainstream conservative media as Nazis is part of how you operate. Given that Ben Shapiro, Jordan Peterson, and Dennis Prager are labeled Nazis, given that is the premise, what do we do about it? Two of these people are very religious Jews, which begs the question, what kind of thinking do the people at Google have? Two of these three people had family members killed in the Holocaust, but you operate off the premise that they are Nazis. It is pretty disturbing, and it gets to the question: do you believe in hate speech? How do you define that? Can you give me a quick definition right now, as it is written down somewhere at Google? Can you give me a definition of hate speech? Yes. Hate speech, as updated in our guidelines, covers assertions of superiority over protected groups to justify violence and discrimination, based on a number of defining characteristics, whether that is race, sexual orientation, or veteran status. Can you give an example of Ben Shapiro or Dennis Prager engaging in hate speech? We evaluate individual pieces of content, not based on the speaker. Do you believe speech can be violence? Not can speech incite violence, which is clearly not protected, but can speech just be violence? Can speech that is not explicitly calling for violence be labeled violent and therefore harmful to people? Is that possible? Congressman, I'm not sure I fully understand the distinction you are drawing. Incitement to violence, things that are encouraging dangerous behavior, those are things that are against our policies. Here's the thing: when you call somebody a Nazi, you could make the argument you are inciting violence. As a country, we all agree the Nazis were bad. We fought an entire country because they were bad.
There is a common thread in this country that they are bad and evil and they should be destroyed. You are operating off of that premise, and frankly it is a good premise to operate on, but what you are implying, then, is that it is OK to use violence against them. When you, one of the most powerful social media companies in the world, label them as Nazis, you are inciting violence. That is wholly irresponsible. And it doesn't stop there. A year ago, it was made clear that your fact-check system blatantly targets conservative newspapers. Are you aware of the story I am talking about? I am not aware of the specific story, congressman. We sometimes get questions of this sort from all political viewpoints. Our fact-check labels are generally done algorithmically, based on a markup, and follow our policy. For the record, they specifically target conservative news media, and oftentimes they have a fact check on there that does not reference the actual article. Google makes sure it is right next to it so people understand that that one is questionable, even though when you read through it, it has nothing to do with the article. A few days ago, one of my constituents posted photos on Facebook of Republican women daring to say there are women for Trump. Facebook took down that video with no explanation. Is there an explanation for that? I have not seen the video, but I'm not sure where we go from here. The practice of silencing millions of people will create wounds and divisions in this country that we cannot heal from. It is extremely worrisome. You have created amazing platforms; amazing things can be done with what these companies have created, but if we continue down this path, it will tear us apart.
You do not have a constitutional obligation to enforce the First Amendment, but I would say you have an obligation to uphold American values, and we should be protecting the First Amendment until the day we die. I will take the prerogative and allow you to make a comment. Thank you for protecting my free speech, Mr. Chairman. The point I wanted to make is that even if we have content moderation that is enforced with the best of principles, and people are striving to be fair and impartial, it is impossible. These standards are irreducibly subjective. What is one person's hate speech, an example given by Congressman Higgins, is somebody else's cherished, loving speech. For example, in European countries, Canada, Australia, and New Zealand, which generally share our values, people who are preaching religious texts that they deeply believe in, and are preaching out of motivations of love, are prosecuted and convicted for engaging in hate against LGBTQ people. I obviously happen to disagree with those viewpoints, but I defend their freedom to express those viewpoints. I did read every single word of Facebook's standards. The more you read them, the more complicated it is. No two Facebook enforcers agree, and neither would any of us. That means we are entrusting to some other authority the power to make decisions that should reside in each of us as individuals: what we choose to see, what we choose not to see, and what we choose to use our own free-speech rights to respond to. I think these platforms have, and I cannot agree more about, the positive potential, but we have to maximize that positive potential through user-empowerment tools and radically increased transparency. I'm not going to limit your speech, but I will limit your time. [laughter] Thank you. Congressman Correa for five minutes. Thank you, Chairman Thompson and Ranking Member, for holding this critical hearing on very interesting, very important issues. I want to turn to the Russian interference in 2016.
The Mueller Report's indictments named 13 Russians and three companies for conspiring to subvert our election system. In 2018, we saw indications that the Russians were at it again. For 2020, former Secretary of Homeland Security Nielsen, before she resigned, said the Russians are at it again, and there are other countries also trying to affect our election system. Hearing your testimony, and addressing the issue of the First Amendment: does the First Amendment cover fake videos online? We talked a little bit about the Pelosi fake video, and maybe you say yes, but I say probably not. I will tell you why: that is a damaging video with false content, and although you may be private companies, when I hear my children tell me, "I saw it on this platform," the assumption is that it is factual. It took you 24 hours to take that video down. The others did not take it down. You are essentially a messenger, and when your information shows up online, this population believes you are credible and the information is credible as well. Moving forward, we have another election happening now, and this information continues to be promulgated through your social media, through your companies. We have a First Amendment issue, but we also have an issue with democracy keeping its hold. Any thoughts? We share the focus on making sure we are ready; 24 hours is not fast enough. Are we playing defense or offense? Are you reacting, or are you being proactive, so the next Nancy Pelosi video is something you can take down faster than 24 hours? We are being proactive. I do agree there is a lot we can do to get faster. Our approach when there is misinformation is making sure people have the context to understand it. We don't want people seeing it in the abstract; we want to inform people, and we have to do so quickly. That is something we are getting better at. I want to ask you something. On the Pelosi video, who put it up? It was uploaded by a regular person with a regular account.
Somebody at home with some very smart software and a good platform was able to put together a fake video and put it up? The technique that was used was to slow down the audio, which is what we see a lot of comedy shows do with a lot of politicians. What were the consequences to the individual putting out a video of somebody, essentially defaming her and hurting her reputation? Congressman, with that video and our approach to misinformation, we reduce the distribution and put content from fact-checkers next to it so people can understand if the content is false or has been manipulated. Mr. Pickles? One of the things we talked about earlier was how to provide context to users. Is your policy changing so you can take it down next time, or are you going to let it ride? We are looking at all of our policies in this area. Are you taking it down or going to let it ride, yes or no? What are we going to see next time there is a video like this? We took it down under our deceptive practices policy. And, not to violate your freedom of speech here, do you think this false video online is constitutionally protected? There is a very strict definition of false speech that is constitutionally unprotected. The Supreme Court has consistently said blatant, outright lies are constitutionally protected unless... So let me switch. Will you write policies so outright lies do not have the devastating effect on our voters that they had on the 2016 election? Like I said, we are looking at the whole issue. Other thoughts? We too are making sure we have the right approach for the election. We want to raise up authoritative content, promote it, and remove violative content. This is the reason why President Trump wants to change the libel laws, because it is now legal to lie about politicians and government officials. Maybe we can work together on some issues. I yield. We recognize the gentlelady from New Jersey for five minutes. Thank you very much.
[laughter] Thank you for being here; this has been very informative. Let me ask you a really quick question, yes or no: does the GIFCT, your collaboration, with companies keeping their secrets, interfere with sharing standards and working together, yes or no? It does not have an effect. I know you use this platform for terrorism. Do you use that platform at all for these sorts of hate groups? Not at present, but certainly after New Zealand, that highlighted that we do need to broaden our approach to the different issues. So, in my briefing, dog whistling has been mentioned as a certain kind of political messaging strategy that employs coded language to send a message to certain groups that flies under the radar, and it is used by white supremacist groups often. And it is rapidly evolving on social media platforms, and it enables the spread and targeting of racism and other sorts of isms we find abhorrent in this country. How do you solve the challenge of moderating dog whistles? I'm happy to start and let others finish. I will take you any way you want to go. We look at our rules, and one of our rules is about behavior. If you are targeting somebody because of an important characteristic, that is an important factor. The GIFCT has a stream of research; one reason is to investigate the latest trends, what we need to learn about those terms. And finally, when we see different kinds of extremists looking to use Twitter, we have banned more than 180 groups from our platform for violent extremism across the spectrum, both in the U.S. and globally, so we have a policy framework and also industry sharing. A lot of this is about getting to the groups. We do have a hate speech policy, but beyond that we know there are groups that are just engaging in bad behavior. We ban not only violent groups but also hate groups, and we have removed more than 200 from our platform. Thank you for the question. We do remove hate speech from our platform, and the concerns you talked about motivated recent changes.
We also work to reduce or demote such content in recommendations and so on. Representative: Did you bring any staff with you today, any employees? Could you please have them stand up? Those who have accompanied Ms. Bickert, could you please stand up? Mr. Pickles, yours? Thank you. Mr. Slater? Thank you very much. A couple of things you mentioned: you talked about making sure people are real and that they are American when they place an advertisement, and they send information to you and you send it back and it proves it; but someone may be pretending to be an American, to be really living here, or to have an address here, so that doesn't necessarily guarantee they are legitimate. That is a challenge; is that understandable? If you could clarify the question. Representative: It's not a question, it's a statement. You talked about making sure people who are doing political advertising are not foreign nationals, that they are American. In this discussion about advertisement, it was stated by somebody there that you do verification to make sure the person is in America, is an American, and that this is not false whatever coming from another nation. I said that doesn't necessarily prove that, as far as I'm concerned. That is Facebook's approach. Representative: My question to you is, are there trigger words that come out of some speech you think should be protected that cause it to be taken down because it incites? Let me give an example from a story in Bloomberg News today that talked about YouTube's new policy on the definition of unprotected hate speech. On the first day it went into effect, one of the people suppressed was an online activist in the U.K. against antisemitism; but in condemning antisemitism he was referring to Nazi expressions and insignia, and he was kicked off. Representative: So there is no trigger word? Who did the definition of hate speech for us earlier? It was hateful conduct, under Twitter. Representative: I think that covers the President of the United States of America, unfortunately.
Representative: Thank you for being here. In the aftermath of the Christchurch incident, we sent a letter to you all asking how much you are spending on counterterror screening and how many people you have allocated to it. We have had interesting conversations over the ensuing weeks, and the basic problems you have brought to me are, first, that that question oversimplifies things because there is an AI component. Yesterday we had a hearing showing AI alone cannot solve this. You agree with that. The second thing you have said to me is that this is a collective-action problem: we are all in this together, and we have the GIFCT. So I have questions, and could you answer yes or no. Does the GIFCT have any full-time employees? Does it have a full-time employee dedicated to it, to run it? We have Facebook people full-time dedicated to the GIFCT. Representative: Does the GIFCT have a brick-and-mortar structure? No, congressman. We host the database physically at Facebook. No; our collaboration is companies working together. We meet in person, we have virtual meetings; it's about collaboration. Nothing further to add. Representative: No brick-and-mortar structure, but I assume you have a Google hangout or a Facebook hangout. There is an association located in Bethesda, Maryland: the Adhesive and Sealant Council. It has five full-time staff. It has a brick-and-mortar structure. And you all cannot get your act together enough to dedicate enough resources to put full-time staff in a building dealing with this problem. I think it speaks to the way in which you are addressing this, with a technocratic, libertarian elitism, all the while people are being killed, all the while things are happening that are highly preventable. Are there any AI systems that any of you have that are not available to the GIFCT? Yes, depending on how our products work. They work differently, so artificial intelligence works differently. We worked for some time on this, to come up with a common technical solution everybody can use.
We have that now for videos, and we give it for free to smaller companies. I want to know if you have any AI. This isn't just AI. That is why we share URLs, which is very low-tech. If someone gives you a URL for content, you don't need AI to look at that. That is why I think it is a combination solution. Nothing further to add. Representative Rose: My understanding is there were no GIFCT points of contact made public from each company until after the Christchurch shooting. I know they were there, but they were not established until after the Christchurch shooting two months ago. Is this the case? We have a channel people can use that gets routed to whoever is on call. Representative Rose: Is that the case, that there were no established POCs? I'm asking you to put it on the record: there were no established POCs at the GIFCT until after the Christchurch shooting, is that correct? That's not publicly listed. I draw a distinction between the POCs and companies. We work together every day. The point you are getting at is crisis response. Representative Rose: I'm getting at the fact that you are not taking it seriously, because there is no public building, no full-time staff, no public POCs until after the Christchurch shooting. That's what I'm speaking to. How is anyone supposed to think you take this collective-action problem seriously if you have no one working on it full-time? This is not something technology alone can solve. This is a problem we are blaming the industry for, rightfully so, and there are the smallest of associations in this town and throughout the country that do so much more than you do, and it is insulting that you would not at least apologize for the fact that there were no established POCs prior to the Christchurch shooting. It was a joke of an association. It remains a joke of an association, and we have got to see this thing dramatically improved. Lastly, if there were terrorist content shown to be on your platforms by a public entity, would you take it down?
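The "common technical solution" for videos and the low-tech URL sharing described in the testimony can be illustrated with a small sketch. This is not GIFCT's actual implementation; the function names here are hypothetical, and real systems use perceptual hashes that tolerate minor edits, whereas this sketch uses exact SHA-256 digests precisely to show the weakness the hearing describes:

```python
import hashlib

# Illustrative shared hash database: each member company contributes
# digests of known violating content; others check uploads against it.
shared_hashes = set()

def fingerprint(content: bytes) -> str:
    # Exact cryptographic hash. Real shared databases use perceptual
    # hashing so slightly altered re-uploads still match; SHA-256 does not.
    return hashlib.sha256(content).hexdigest()

def contribute(content: bytes) -> None:
    """A member company flags content and shares its fingerprint."""
    shared_hashes.add(fingerprint(content))

def is_known_violation(upload: bytes) -> bool:
    """Another member checks a new upload against the shared database."""
    return fingerprint(upload) in shared_hashes

contribute(b"known violating video bytes")
print(is_known_violation(b"known violating video bytes"))   # True
print(is_known_violation(b"slightly altered video bytes"))  # False
```

The second check coming back False is the point: an exact hash misses any edited copy, which is why the witnesses describe pairing hash matching with low-tech URL sharing and human review as a "combination solution."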
Then why, when a whistleblower organization reveals Facebook is generating through its AI platform al Qaeda community pages such as this one, a "local business," al Qaeda in the Arabian Peninsula, with 217 followers? I have it right here on my phone: "the most active al Qaeda franchise that emerged due to weakened central leadership... a militant Islamist organization primarily dominant in Yemen and Saudi Arabia." Why is this still up? We have every right to think right now that you are not taking this seriously, and I don't mean Congress, I mean the American people. Representative: Thank you so much, Mr. Chairman. We have already talked about the attack at Christchurch, and we also know it was law enforcement who notified Facebook about what was going on. Ms. Bickert, can you talk about your work with law enforcement and share specific things you are doing to further enhance your ability to work with law enforcement, to continue to work to prevent incidents like this from happening again? We have a special point of contact for our law enforcement engagement team. People within our company work with law enforcement, and those relationships functioning well are the reason New Zealand law enforcement were able to reach out to us. And once they did, within minutes... Representative: You don't believe they would have been able to reach out to you if you didn't have a law enforcement team? Wouldn't that be their responsibility, for any law enforcement agency that saw what was happening live on your platform, to contact you? We want to make it easy so that if they see something, they know exactly where to go. When New Zealand reached out, we responded within minutes. We also have an online portal that is manned 24 hours a day, so if there is any kind of emergency, we are on it. If we see an imminent risk of harm, we proactively reach out.
Anytime there is a terror attack or mass violence in the world, we proactively reach out to law enforcement to make sure that if there are accounts we should know about, or things about victims, or any action we should take, we are on it immediately. Representative: Mr. Pickles, you said we will not solve these problems by removing content alone. Most companies do a pretty good job in terms of combating or fighting child exploitation or pornography. I would like to hear you talk about your efforts to combat terrorism and share some of the similarities, because we can't solve the problems just by taking down the content alone. If you could show some similarities in terms of your efforts to combat terrorism and child pornography; I know you put a lot of resources into combating child exploitation, but can you talk about similarities in the two goals? We have similarities and differences. In the child sexual exploitation space, the similarity is we can proactively detect an image and stop it from being distributed, and work with law enforcement to bring that person to justice. We work with the National Center for Missing and Exploited Children and law enforcement around the world, so that process of discovering content and working with law enforcement is seamless, particularly for child exploitation but also for violent threats. Representative: What about combating terrorism? Removing the content is our response, but there is a law enforcement response as well, which holds people to account and potentially prosecutes them for criminal offenses. Working in tandem between the two is important. We have a similar industry body that shares information, and we work with governments to share threat intelligence and analysis of trends so we can stay ahead of bad actors. But the biggest area of similarity is that the bad actors never stay the same; they are constantly evolving, so we have to constantly be looking for the next opportunity to improve.
Representative: The chairman asked a question about the video of the Speaker, and why some of you removed it and some of you did not. Mr. Slater, I was pleased to hear your answer, which was, you look for deceptive practices: if it is deceptive, you remove it. And Ms. Strossen, I believe you said the social media platforms' free-speech right is their ability to decide what is posted and not posted. It is that simple: they can decide what is posted and not posted. Mr. Slater, if you could talk a little bit about your process; it was deceptive, you took it down. Mr. Slater: We have important guidelines, and one is about deceptive practices. We review content thoroughly and determine whether it is violating, or whether it fits into education, documentary, or so forth; we do that on an individualized basis to see if the context has been met. We present those guidelines publicly on our website for anyone to read. Representative: Is Google an American company? We are headquartered in California, yes. Representative: Are you loyal to the American republic? Is that something you think about? We build products for everyone. We have offices across this country. We've invested heavily in this country and are proud to be founded in this country. So if you found out a terrorist organization was using Google products, would you stop them? We have a policy of addressing content from designated terrorist organizations, to prohibit it and make sure it is taken down. Mr. Taylor: I'm not talking about content. If you found a terrorist organization was using Gmail within that organization, would you stop that? Where appropriate, we will work with law enforcement to provide information about relevant threats, behavior, and so on, and we will respond to valid requests for information from law enforcement. Mr. Taylor: I am asking, if a terrorist organization uses a Google product and you know about it, do you allow that to continue, or do you have a policy?
Mr. Slater Under appropriate circumstances, we would terminate a user and provide information to law enforcement.

Rep. Taylor Your answer is opaque. I'm trying to figure it out. If a terrorist organization is using a Google product, do you have a policy about what to do about that?

Mr. Slater I'm attempting to articulate that policy. I would be happy to come back to you with further information if it is unclear.

Listen to the answer about referring it to law enforcement. That's an appropriate response. If there is a suspicion criminal activity is afoot, you would want to refer it to law enforcement and let law enforcement make the call on that. Maybe that helps you a little bit with that particular portion of it.

Rep. Taylor So the Islamic Republic of Iran is the largest state sponsor of terrorism in the world, and pieces of the Islamic Republic are terror organizations. Do you have a ban on that terrorist organization and their ability to use Google products?

Mr. Slater We have prohibitions on designated terrorist organizations using products and putting up content and so on.

Rep. Taylor So you seek to ban terrorist organizations from using Google products? I'm not trying to put words in your mouth, just trying to understand your position. And I'm not just asking about content, I'm asking about services: Gmail, Calendar, a host of different services people can use. I'm trying to ask about services, not content. The focus of the hearing is about content, but I'm asking about the actual services.

Mr. Slater If we were to have knowledge, as my colleagues have said, bad actors are constantly changing their approaches, trying to game the system and so on, but we do everything we can to prohibit illegal behavior by those organizations.

Rep. Taylor Do you have screens set up to figure out who the users are, to pierce the veil, so to speak, of an anonymous account, to figure out who that might be and where it is sourcing from? Is that part of how you operate?

Mr. Slater Absolutely.
We use automated systems and threat analysis to ferret out behaviors that may be indicative of bad actors.

Rep. Taylor Thank you. I appreciate your answers, and I appreciate the panel for being here. This is an important topic.

Representative Titus We have heard about incidents but haven't mentioned what occurred in my district of Las Vegas. This was the deadliest shooting in the United States in modern history. On October 1, 2017, a gunman opened fire on a music festival, and after that attack there was a large volume of hoaxes, conspiracy theories, and misinformation that popped up across your platforms, including about the identity of the gunman, and some even called it a false flag. On Facebook, where loved ones could check in to see if they were safe and what have you, there were all kinds of things that popped up, like fake donation drives and false information claiming the shooter was associated with some anti-Trump army, just a lot of myths where people were trying to make content. I wonder if you have any specific policies or protocols or algorithms to deal with the immediate aftermath of a mass shooting like this? All three of you.

Thank you, congresswoman. The Las Vegas attack was a horrible tragedy. We think we have improved since then, but I want to explain what our policies were then and how we have gotten better. With the Las Vegas attack, we removed any information praising the attacker and took steps to protect the accounts of the victims. Sometimes in the aftermath of these things we see people try to hack into accounts or do other things like that. So we took steps to protect victims and worked very closely with law enforcement. Since then, one area where we have gotten better is crisis response in the wake of a violent tragedy.
With Christchurch, you had companies at the table and others communicating in real time, sharing with one another URLs, new versions of the video, and so forth; it was literally a real-time operation for the first 24 hours. In that 24 hours, on Facebook alone, we were able to stop 1.2 million versions of the video from hitting our site. We have gotten better technically, but this is an area where we will continue to invest. One of the challenges we face is that bad actors change behavior to get around our rules. One thing we saw after Christchurch which was concerning: because companies like ours were removing content at scale, people were calling that censorship, so there were people uploading content to prove the attack had happened. That is a challenge we haven't had to deal with before and something we are very mindful of. We need to figure out the best way to combat that challenge.

We have policies against the abuse and harassment of survivors and victims and their families. If someone is targeting a person who has been a victim or a survivor, and is denying the event took place or is harassing them, we would take action for the harassment in that space. On the question of how we work with organizations to spread a positive message going forward: if there are groups in your communities affected by this and working with the victims to show the positivity of your community, then we would be keen to work with those organizations wherever they are in the U.S. to spread that message of positivity.

Mr. Slater This is of utmost seriousness. It was a tragic event for our country, for society. Personally, as someone who lived in Las Vegas and New Zealand, I hold both of these events deeply in my heart. We take a threefold approach to misinformation and the other conduct you were talking about.
On YouTube, we seek to raise up authoritative sources during a breaking news event, to make sure authoritative sources outpace those who might wish to misinform. We strike and remove denials of well-documented events, and people spreading hate speech toward survivors of those events, and we also seek to reduce exposure to content that is harmful misinformation, including conspiracies and the like.

Representative Titus These people have been victimized in the worst sort of way already. You hate to see them become victims of something over the internet. One thing we heard from law enforcement relates to what you were saying, Mr. Slater: using algorithms to elevate posts from law enforcement, so people seeking help go to those first, as opposed to other information that comes in randomly. Since you are working with law enforcement, maybe you could consider that.

Mr. Slater That is something we can explore with law enforcement. We try to make sure people have accurate information after attacks. After Las Vegas, we learned from that and are in a better place today.

Representative Titus I would appreciate it if you would look into that. I think law enforcement would too.

Representative To our representatives from Facebook, Google, and Twitter, thank you for being here today, and thank you for bravely appearing at the closed briefing we had earlier this week. We seek to continue to examine this complex issue of balancing First Amendment rights against making sure content on social media does not promote terrorist activity. Professor, you were not here during the closed briefing, so I want to ask a couple of questions of you. During your testimony you highlight the potential dangers associated with content moderation, even when done by private companies in accordance with their First Amendment rights. You make a case for social media companies to provide free speech protections to users. How to counter the potentially adverse impact of terror content and disinformation is certainly a complex problem.
While restricting such expression might appear to be a clear and simple solution, it is neither, and moreover it is wrong. That is the conclusion of the 11-page report you provided, but could you briefly summarize it for the purpose of this hearing?

Ms. Strossen The problem is the inherent subjectivity of the standards, no matter how much you articulate them. It is wonderful that Facebook and other companies have recently shared their standards with us. You can see it is impossible to apply them consistently to any particular content. Reasonable people will disagree: the concept of hate, the concept of terror, the concept of misinformation are all strongly debated. One person's fake news is somebody else's cherished truth. A lot of attention has been given to the reports about discrimination against conservative viewpoints in how these policies are implemented. I want to point out there also have been a lot of complaints from progressives and civil rights activists and social justice activists, complaining that their speech is being suppressed. What I am saying is, no matter how good the intentions are, no matter who is enforcing it, a government authority or a private company, there is going to be at best unpredictable and arbitrary enforcement, and at worst, discriminatory enforcement.

Representative As an expert in the First Amendment, do you feel content moderation by social media companies has gone too far?

Ms. Strossen They have a First Amendment right; that is important to stress. But given the enormous power of these platforms, as the Supreme Court said in a unanimous decision two years ago, this is now the most important forum for the exchange of information and ideas, including with elected officials, those who should be accountable to we the people. So if we do not have a free and unfettered exchange of ideas on these platforms, for all practical purposes we don't have it.
That is a threat to our democratic republic as well as to individual liberty. There is a lot these platforms can do in terms of user empowerment, so we can make our own choices about what to see and not to see, and also provide information that will help us evaluate the credibility of the information.

Representative Do you have any recommendations you feel would balance individuals' First Amendment rights against protecting social media from terrorists being able to use it as a platform? First for the social media companies, and then, are there any recommendations you would have for this body, things Congress should consider, that would help us as we navigate this difficult situation?

Ms. Strossen The oversight this body is exercising vigorously is extremely important. I think it means encouraging, but not requiring, companies to be respectful of all concerns: human rights concerns of fairness and transparency and due process, as well as free speech, but also concerns about potential terrorism and dangerous speech. I think the United States Supreme Court and international human rights norms, which largely overlap, have gotten it right. They restrict discretion to enforce standards by insisting that before speech can be punished or suppressed, there has to be a specific and direct causal connection between the speech in that particular context and an imminent danger. We can never look at words alone in isolation, to get back to the question I was asked by the congresswoman, because you have to look at context. If in a particular context there is a true threat, there is intentional incitement of violence, there is material support of terrorism, there are defamatory statements, there are fraudulent statements, all of that can be punished by the government, and therefore those standards should be enforced by social media as well. That is the right way to strike a balance here.

Representative I'm going to have a different approach than my colleagues.
I was a member of the city council in Kansas City when the Klan planned a big march in Swope Park, and I fought against it. The ACLU supported the Klan and said that if I passed an ordinance, they would challenge it in court. I'm not mad, I'm not upset; I'm a former board member of the ACLU. I think free speech has to be practiced, even when it is unpleasant. In some ways I feel sorry for you, though not enough to let you off without beating up on you a little bit, but I'm afraid for our country. We have entered an age where people respect an alternative truth, and it is just so painful to me to watch it, and I don't think I'm watching it in isolation: an alternative truth where people say something that is not true and continue to say it. It doesn't matter. I saw last night where the president said Barack Obama started this border policy, and I am correcting it. And this is what one should consider: what one of the tv networks did is put up people making statements about what was happening. They showed Jeff Sessions when he first announced the separation policy at the border. The problem is, as Churchill said, a lie can travel halfway around the world before the truth puts on its shoes. And that is true. If we started a 20th-century new bible, that should be one of the scriptures, because it is a fact. And the truth cannot always be uncontaminated with sprinkles of deceit. So you guys have a tough job. I don't want to make it seem like it is something you can do easily. Our system of government, and even beyond that our moral system, depends a lot more than I realized on shame; I didn't realize this until recently, though I spent three and a half years in the seminary. There are some things that laws can't touch, and our society functions on shame. So when shame is dismembered, I'm not sure what else we have left. What I would like for you to react to, and maybe consider, is this: instead of taking something down, in some instances, why not just put up the truth next to it? The truth. I'm not talking about somebody else's response.
I'm talking about the truth. There is a video, I wish I had brought it to you, where they say, here is the lie and here is the truth. Anybody else?

Mr. Slater This is a very important issue, congressman. Absolutely. What we are doing is twofold regarding harmful misinformation. One is where there is a video claiming, let's say, the moon landing didn't happen, or the earth is flat. The video may be up, but you will see a box underneath that says, here is a link to the Wikipedia page about the moon landing, or the Encyclopaedia Britannica, where you can learn more.

Representative You do that now?

Mr. Slater We do that today, yes, sir. The other thing we do is reduce the frequency of recommendations of information that might be harmful misinformation, such as those sorts of conspiracies.

You write about the interplay between what is on social media, the news media, and what is on tv, and how that cycle of information works together. It's a critical part of solving this. On Twitter, because we are a public platform, very quickly people are able to challenge, expose, say that is not true, here is the evidence, here is the data. There is something incredibly important about these conversations taking place in public. I think that is something that, as we move into the information century, we need to bear in mind.

If there is misinformation that a third-party fact-checking organization has debunked (we work with 45 of these organizations worldwide; they all meet objective criteria, and they are all certified), we take articles from those fact checkers and put them right next to the content so people have that context. And if you share that content, we say this content has been rated false by a fact checker, and we link to it. Similarly, when it comes to misinformation about things like vaccines, we are working with organizations like the CDC and the World Health Organization to get content from them that we can put next to vaccine-related misinformation on our site.
We think this is a really important approach. Another thing we are trying to do is empower those who have the best voices to reach the right audience. We invest heavily in promoting counter-speech and truthful speech.

Chairman Before we close, I would like to insert into the record a number of documents. The first is several letters addressed to Facebook as well as Twitter and YouTube about hateful content on their platforms. The second is a joint report from the Center for European Studies and the Counter Extremism Project. The third is a statement for the record from the Anti-Defamation League. The fourth are copies of the community standards for Facebook, Twitter, and Google. Without objection, so ordered. I thank the witnesses for their valuable testimony and the members for their questions. The members of the committee may have additional questions for the witnesses, and we ask that you respond expeditiously, in writing, to those questions. The other point I would like to make, for Facebook: you were 30 hours late with your testimony. Staff took note of it, and for a company your size, that was just not acceptable to the committee. So I want the record to reflect that. Without objection, the committee record will be kept open for 10 days. Hearing no further business, the committee stands adjourned. [gavel striking block]