Online content moderation has largely enabled the internet experience that we know today. Whether looking up restaurant reviews on Yelp, catching up on videos on YouTube, or checking on a friend or loved one on social media, these are experiences that we have come to know and rely on. And the platforms we go to for these things have been enabled by user-generated content, as well as by the ability of these companies to moderate that content and create communities. Section 230 of the Communications Decency Act has enabled that ecosystem to develop. By giving online companies the ability to moderate content without equating them to the publisher or speaker of that content, we have enabled the creation of massive online communities where millions and even billions of people come together and interact. Today, this committee will be examining the world that section 230 has enabled, the good and the bad. I would like to thank the witnesses for appearing before us today. Each of you represents important perspectives related to content moderation in the online ecosystem. Many of you bring up complex concerns in your testimony, and I agree this is a complex issue. I know that some of you have argued that Congress should amend 230 to address things such as online criminal activity, disinformation and hate speech, and I agree these are serious issues. Like too many other communities, my hometown of Pittsburgh has seen what unchecked hate can lead to. Almost a year ago our community suffered one of the deadliest antisemitic attacks in our nation's history. The shooter did so after posting antisemitic remarks online, finally posting that he was going in. A similar attack occurred in New Zealand, and the gunman streamed his despicable acts on social media sites. While some of these sites moved to stop the spread of that content, many did not move fast enough. The same algorithms meant to help sports highlights and celebrity news go viral helped amplify his heinous acts. In 2016 we saw similar issues when foreign adversaries used the power of these platforms against us to spread disinformation and sow division and distrust in our leaders and institutions. Clearly, we all need to do better, and I strongly encourage the witnesses before us that represent these online platforms, and the other major platforms, to step up. The other witnesses on the panel bring up serious concerns with the kind of content available on your platforms and the impact that content is having on society. As they point out, some of those impacts are very disturbing. That being said, section 230 does not just protect the largest platforms or the most irreverent websites. It enables comment sections on individual blogs, lets people leave honest and open reviews, and allows free and open discussions about controversial topics. The kind of ecosystem that has been enabled by more open online discussion has enriched our lives and our democracy. The ability of individuals, particularly in marginalized communities, to have their voices heard cannot be overstated. The ability of people to post content that speaks truth to power has created political movements in this country and others that have changed the world we live in. We all need to recognize the incredible power this technology has for good, as well as the risks that we face when it is misused. I want to thank you again for being here, and I look forward to our discussion. I would like to yield the balance of my time to my good friend. Thank you, Mr. Chairman. I want to thank the witnesses for being here today.
In April 2018 Mark Zuckerberg came before Congress and said, of the interference in the 2016 presidential election, it was my mistake and I'm sorry. Fast-forward 555 days, and I fear Mr. Zuckerberg may not have learned from his mistakes. Recent developments confirm what we have all feared: Facebook will continue to allow lies in political ads, once again making the online ecosystem fertile ground for election interference in 2020. The decision to remove blatant falsehoods should not be a difficult one, and the choice between hate speech and online bullying on one hand and fact-driven debate on the other should be easy. If Facebook does not want to police the truth in political speech, then they should get out of the game. I hope this hearing produces a robust discussion, because we need it now more than ever. Mr. Chairman, I yield back. The chair now recognizes the ranking member of the subcommittee for five minutes for his opening statement. Thank you, Mr. Chairman, for today's hearing, and thank you to our witnesses for appearing before us. Welcome to today's hearing on content moderation and review of section 230 of the Communications Decency Act. This hearing is a continuation of a serious discussion that we began last session on how Congress should examine the law and ensure accountability and transparency for the hundreds of millions of Americans using the internet today. We also welcome witnesses from a balanced group of stakeholders on section 230, ranging from large to small companies as well as academics. Let me be clear: careless changes could lead to a slippery slope, a death by a thousand cuts, that some would argue would end the internet industry just as surely as repeal would. Before we discuss whether or not Congress should make modifications to the law, we should understand how we got to this point, and it is important to look at 230 and when it was written. At the time, the decency portion of the Telecom Act of 1996 included other prohibitions on objectionable content on the internet. While the provisions that were written to target obscene content were struck down by the Supreme Court, the intent remained that interactive computer services would proactively take down offensive content, to help control what comes in through the portals of our computers and what our children see. It is unfortunate that such a broad interpretation of section 230 has taken hold, one that grants broad liability protection without platforms having to demonstrate that they are doing everything possible. Instead of encouraging its intended use, numerous platforms have hidden behind the shield and used it as a tool to avoid litigation without having to take responsibility. Not only are Good Samaritans being selective in taking down harmful or illegal activity, but section 230 has been interpreted so broadly that bad samaritans can skate by without accountability. That is not to say all platforms abuse what Congress afforded them; many do great things, and some of the bigger platforms take down billions of pieces of content and accounts annually. But oftentimes these are the exception, not the rule. Today we will dig deeper to learn how platforms decide to remove content, whether with the tools provided by section 230 or your own self-constructed terms of service. Under either authority, we should encourage enforcement to continue. Mr. Chairman, I thank you for holding this important hearing so we can have an open discussion on the intent of 230 and whether we should reevaluate the law. We must ensure the platforms are held reasonably accountable for activity on their platforms without drastically affecting innovative startups. With that, I yield back the balance of my time. The gentleman yields back.
This is a joint hearing between our committee and the Committee on Consumer Protection and Commerce, so I would like to recognize the chair of that committee for five minutes. Thank you, Mr. Chairman, and good morning to all the panelists; thank you for being here. The internet certainly has improved our lives in many, many ways and enabled Americans to more actively participate in society, education and commerce. Section 230 of the Communications Decency Act has been at the heart of United States internet policy for over 20 years. Many say this law allowed speech to flourish, allowing the internet to grow into what it is today. In the early days of the internet, it was intended to encourage online platforms to moderate user-generated content, to remove offensive, dangerous or illegal content. The internet has come a long way since the law was first enacted. The amount and sophistication of user postings has increased exponentially. Importantly, the number of Americans who report experiencing severe online harassment, which includes sexual harassment, stalking, bullying and threats of violence, has gone up over the last two years; 37 percent of users say they experienced that this year. Likewise, extremism, hate speech, election interference and other problematic content is proliferating. The spread of such content is problematic, that is for sure, and it causes real harm, harm that multibillion-dollar companies like Facebook, Google and Twitter can't or won't fix. As if this were not enough cause for concern, more for-profit businesses are attempting to use section 230 as a liability shield for activity that has nothing to do with third-party content or content moderation policy. In a recent Washington Post article, executives seemed to open the door to claiming broad immunity from labor, criminal and local liability based on section 230. This would represent a major unraveling of 200 years of social contract, community governance and congressional intent. Also at issue is the Federal Trade Commission's section 5 authority over unfair or deceptive practices. The FTC can pursue section 5 cases over website-generated content, but terms of service violations involving third-party content may be precluded by the 230 immunity. I also wanted to talk about injecting 230 into trade agreements. We have already seen that in the Japan trade agreement, and there is a real push to include it now in the Mexico-Canada-US trade agreement. There is no place for that. I think that the laws in the other countries do not really accommodate what the United States has done with 230. We are also having a discussion, an important conversation, about 230 right now. In the midst of that conversation, and because of all the new developments, I think it is just inappropriate at this moment to insert this liability protection into trade agreements, and as a member of the working group that is helping to negotiate the agreement, I am pushing hard to make sure that it just is not there. Whether or not we need any adjustment to 230, it should just not be in trade agreements. So all of the issues that we are talking about today indicate that there may be a larger problem: that 230 no longer is achieving the goal of encouraging platforms to protect their users. Today, I hope that we can discuss holistic solutions. We are not talking about eliminating 230, but about taking a new look at it in light of the many changes that we are seeing in the world right now.
I look forward to hearing from our witnesses about how it can be made even better. I yield back. The chair now recognizes the ranking member of the committee, Ms. McMorris Rodgers. Good morning. Welcome to today's joint hearing on online content management. As Republican leader on the consumer subcommittee, my priority is to protect consumers while preserving the ability of small businesses and startups to innovate. In that spirit we are discussing online platforms and section 230 of the Communications Decency Act. In the early days of the internet, two companies were sued for content posted on their websites by users. One company sought to moderate content on its platform and the other did not. In deciding these cases, the courts found the company that did not make any content decisions was immune from liability, but the company that moderated content was not. It was after these decisions that Congress created section 230. Section 230 is intended to protect interactive computer services from being sued over what users post, while allowing them to moderate content that may be harmful, illicit or illegal. This liability protection has played a critical and important role in how we regulate the internet. It allows small businesses and innovators to thrive online without the fear of frivolous lawsuits from those looking to make a quick buck. Section 230 is also largely misunderstood. Congress never intended to provide immunity only to websites that are neutral. Congress never wanted platforms to simply be neutral conduits, but in fact wanted platforms to moderate content. The liability protection extended to allow platforms to make good faith efforts to moderate material that is obscene, lewd, excessively violent or harassing. There is supposed to be a balance to section 230: small internet companies enjoy a safe harbor to innovate and flourish online, while companies are also incentivized to keep the internet clear of offensive and violent content by being empowered to act and clean up their own sites. The internet revolutionized freedom of speech by providing a platform for every American to have their voice heard and to access an infinite amount of information at their fingertips. Medium and other online blogs provide a platform for anyone to write. Wikipedia provides free, in-depth information on almost any topic you can imagine through mostly user-generated and user-moderated content. Companies that started in dorm rooms and garages are now global powerhouses. We take great pride in being the global leader in tech and innovation. But while some of our biggest companies have grown, have they matured? Today it is often difficult to go online without seeing harmful, disgusting or illegal content. To be clear, I fully support free speech, and society benefits from open dialogue and free expression online. I know there have been calls for big government to mandate or dictate free speech or ensure fairness online, and they are coming from both sides of the aisle. While I share some of the concerns that others have expressed and that are driving some of these policy proposals, I do not believe the proposals are consistent with the First Amendment. Republicans successfully fought to repeal the FCC's fairness doctrine for broadcast regulation during the 1980s, and I strongly caution against advocating for a similar doctrine online. It should not be the FCC's, FTC's or any government agency's job to moderate free speech online. Instead, we should continue to provide oversight of big tech and their use of section 230, and encourage responsible moderation of content. This is key:
How do we ensure they are responsibly earning their liability protection? We want companies to benefit not only from the shield but also to use the sword Congress afforded them to rid their sites of harmful content. I understand it is a delicate issue, and certainly one we must get right. I want to be very clear: I am not for gutting section 230. It is important for consumers and entities in the internet ecosystem. Misguided and hasty attempts to amend or repeal section 230 for bias or other reasons could have unintended consequences for free speech and for the ability of small businesses to provide new and innovative services. At the same time, it is clear we have reached a point where it is incumbent upon us as policymakers to have a serious and thoughtful discussion about achieving the right balance on section 230. I thank you for the time, and I yield back. The chair recognizes the chairman of the full committee for five minutes for his opening statement. Thank you, Chairman. The internet is one of the single greatest human innovations, enabling community and economic opportunity, with trillions of dollars exchanged online every year. One of the principal laws that paved the way for the internet to flourish is section 230 of the Communications Decency Act, which is part of the Telecommunications Act of 1996. We enacted the section to give platforms the ability to moderate their sites and protect consumers without excessive risk of litigation, and to be clear, section 230 has been an incredible success. In the more than 20 years since section 230 became law, the internet has grown far more complex and sophisticated. In 1996 the global internet reached 36 million users, less than 1 percent of the world's population, and only one in four Americans reported going online every day. Compare that to now, when all of us are online almost every hour that we are not sleeping, and earlier this year the internet passed 4.39 billion users worldwide. Here in the US there are about 230 million smartphones that provide Americans access to online platforms. The internet is a central part of our economic fabric in a way that we could not have dreamed of when we passed the Telecommunications Act. But with that complexity and growth, we have seen the darker side of the internet grow too. Online radicalization has spread, leading to mass shootings in schools, churches and movie theaters. International terrorists are using the internet to groom recruits. Platforms have been used for the illegal sale of drugs, including those that sparked the opioid epidemic. Foreign governments and fraudsters pollute our campaigns using new technology like deepfakes, designed to sow civil unrest and disrupt democratic elections. There are constant attacks against women, people of color and other minority groups. Perhaps most despicable of all is the horrendous sexual exploitation of children online. In 1998 there were 3,000 reports of material depicting the sexual abuse of children online. Last year, 45 million such images and videos were reported. While platforms are now better at detecting and removing this material, recent reporting shows law enforcement officers are overwhelmed by the crisis. These are issues that we cannot ignore, and tech companies need to step up with new tools to address these serious problems. Each of these issues demonstrates how online content moderation has not stayed true to the values underlying section 230 and has not kept pace with the increasing importance of the global internet.
There is no easy solution to keep this content off the internet. As policymakers, we have our ideas of how we might tackle the symptoms, improve content moderation online and also protect free speech, but we must seek to fully understand the breadth and depth of the internet today, how it has changed, and how it can be made better, and we have to be thoughtful, careful and bipartisan in our approach. It is with that in mind that I was disappointed that Ambassador Lighthizer, the US Trade Representative, refused to testify today. The USTR has included language mirroring section 230 in the United States-Mexico-Canada Agreement and in the US-Japan trade agreement. The ranking member and I wrote to the Ambassador in August raising concerns about why the USTR has included this language in trade deals while we are debating these very issues across the nation, and I was hoping to hear his perspective on why he believes that was appropriate. Including provisions in trade agreements that are controversial to Democrats and Republicans is not the way to get support from Congress, obviously. Hopefully the Ambassador will be more responsive to bipartisan requests in the future, and with that, Mr. Chairman, I yield back. The gentleman yields back. The chair would like to remind members that, pursuant to the committee rules, all members' written opening statements shall be made part of the record. Can mine be made part of the record? I apologize; the chair yields to my good friend and ranking member. Times have changed. [laughter] Thank you, Mr. Chairman, and I want to welcome our witnesses. Thank you for being here; this is important work, and I will tell you at the outset that we have another subcommittee meeting upstairs, so we will all be bouncing in between, but I have your testimony and look forward to your comments. This is without question a panel of experts in this field; we are blessed to have you here. Last Congress we held a significant hearing that jump-started the discussion of the state of online protections, the legal basis of the modern internet ecosystem and, of course, the future of content moderation and the algorithms that determine what we see online. That is an issue our constituents want to know more about. Today we will undertake a deeper review of section 230 of the Communications Decency Act, a portion of the 1996 Telecommunications Act. In August of this year, the chairman and I raised the issue of the appearance of export language mirroring section 230 in trade agreements. We did that in a letter to the US Trade Representative, Robert Lighthizer. We expressed concerns about this internet policy being taken out of the context of its intent, and said that in the future the United States Trade Representative should consult our committee when negotiating on these very issues, rather than cherry-picking just a portion of the law. I want to go back to the trade case. I thought the letter to the Ambassador was going to send the right message: don't try to blow up USMCA. I have voted for every trade agreement, and I am a big free trader, but we are getting blown off on this and I am tired of it. Then we found out it is in the Japan agreement. Clearly they are not listening to our committee or to us, so we are serious about this matter. The USTR should take note: this is a real problem.
If we refer to section 230 as the 26 words that created the internet, as has been popularized by some, we are already missing the mark, since that count leaves out the Good Samaritan obligation. We should start talking about the section as the 83 words that can preserve the internet. All of the section's provisions should be taken together, not apart, and many of our concerns could be addressed if companies just enforced their terms of service. But for better context, I believe a history lesson is in order. Today's internet looks far different than it did when CompuServe message boards dominated the internet in the 90s. While the internet is more dynamic and content-rich now, there were problems even in its infancy with managing the vast amounts of speech online. Chris Cox, a former member of this committee who authored the legislation, said on the House floor during debate on his amendment: no matter how big the army of bureaucrats, it is not going to protect my kids, because I do not think the federal government will get there in time. So Congress recognized then, as we do now, that we need companies to step up to the plate and curb illegal content on their platforms. The internet is not something to be managed by government. The act bestowed on providers and users the ability to go after illegal and harmful content without being held liable. While the law was intended to empower, we have seen social media platforms slow to clean up while being quick to claim immunity for such content. In some cases, internet platforms have shirked responsibility for the content on their platforms. The liability shield, as interpreted through common law, has obscured the central bargain that was struck: internet platforms with user-generated content are protected in exchange for good faith efforts to moderate harmful content. So let me repeat, for those that want to be included in the interactive computer services definition: enforce your own terms of service. I look forward to a discussion on separating legal speech from illegal content, on how we should think of CDA 230 protections for small platforms versus large ones, and on how various elements of the ecosystem shape what consumers see or do not see. Thank you for having this hearing; I look forward to getting all the feedback from the witnesses, but I have to go vote in the other hearing. The administration does not listen to you guys either? I will let my statement speak for itself; clearly, we will find out if they are listening or not. I will reiterate that, pursuant to the committee rules, all members' written statements will be made part of the record. We now want to introduce our witnesses for today's hearing: Mr. Steve Huffman, CEO of Reddit, welcome. Ms. Danielle Citron, professor of law at Boston University School of Law. Dr. Corynne McSherry, legal director of the Electronic Frontier Foundation, welcome. Ms. Gretchen Peters, executive director of the Alliance to Counter Crime Online. Ms. Katherine Oyama, head of intellectual property policy for Google. And Dr. Hany Farid, professor at the University of California, Berkeley. We want to thank you for joining us today, and we look forward to your testimony. At this time the chair will recognize each witness for five minutes to provide opening statements. Before we begin, I would like to explain our lighting system. In front of you is a series of lights. The light will initially be green. The light will turn yellow when you have one minute remaining. Please wrap up your testimony at that point.
When the light turns red, we cut your microphone off. We don't, actually, but please try to finish before then. Chairpersons, ranking members, members of the committee, thank you for inviting me. My name is Steve Huffman, and I am the cofounder and CEO of Reddit. I am here to share why 230 is critical to the open internet. Reddit moderates content in a fundamentally different way: we empower communities, and that model relies on 230. Changes to 230 pose existential threats not just to us but to thousands of startups across the country, and they would destroy what competition remains in our industry. A college roommate and I started Reddit in 2005 as a forum to find news and interesting content. Since then it has grown into a vast community-driven site where people find not just news and a laugh, but perspective and a sense of belonging. Reddit is communities that are created and moderated by our users, a model that has taken years to develop, with lessons learned along the way. I left the company in 2009, and for a time Reddit lurched from crisis to crisis over the very questions of moderation we are discussing today. In 2015 I came back, because I realized the vast majority of our communities were providing an invaluable experience and Reddit needed better moderation. The way Reddit handles moderation is unique in the industry: a model akin to our democracy, where everyone follows rules, self-organizes, and ultimately shares some responsibility for how the platform works. First, we have our content policy, fundamental rules that everyone on Reddit must follow. Think of these as our federal laws. We employ a group collectively known as the anti-evil team to enforce these policies. Below that, each community creates its own rules. These rules, written by our volunteer moderators, are tailored to the unique needs of their communities and tend to be more specific and complex. This self-moderation by our users is the most scalable solution to the challenges of moderating content. Individual users play a crucial role as well. They can vote up or down on any piece of content and report it to our anti-evil team. Through this system, users can accept or reject any piece of content, thus turning every user into a moderator. The system is not perfect. It is possible to find things on Reddit that break the rules, but our effectiveness has improved. Analysis has shown our approach to be largely effective in curbing bad behavior, and when we investigated Russian attempts at manipulating our platform, we found that less than one percent of that content made it past our routine defenses of automated systems, community moderation, and downvotes from everyday users. We constantly evolve our content policy, and since my return we have made a series of updates addressing violent content, pornography, controlled goods and harassment. These are just a few of the ways we have worked to moderate in good faith, which brings us to the question of what Reddit would look like without 230. For starters, we would be forced to defend against anyone with enough money to bankroll a lawsuit. It is worth noting that the cases most commonly dismissed under 230 are regarding defamation. An open platform where people are allowed to voice opinions would be a prime target for these, enabling censorship through litigation. Even targeted changes to 230 would create a regulatory burden on the industry, benefiting the largest companies by placing a significant cost on smaller competitors.
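A minimal sketch of the layered moderation model Mr. Huffman describes, sitewide policy enforced centrally, community rules enforced by volunteer moderators, and voting and reporting by everyday users. All community names, rules and thresholds below are hypothetical illustrations, not Reddit's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class Post:
    community: str
    text: str
    score: int = 0    # net of user upvotes and downvotes
    reports: int = 0  # user reports escalate content for review

# Layer 1: sitewide content policy, the "federal laws" enforced centrally.
SITEWIDE_BANNED_TERMS = {"terror recruitment", "drug sale"}

# Layer 2: per-community rules written by volunteer moderators.
COMMUNITY_BANNED_TERMS = {
    "r/recovery": {"vendor links"},  # hypothetical community and rule
}

def moderate(post: Post) -> str:
    text = post.text.lower()
    # Central ("anti-evil") enforcement of sitewide policy.
    if any(term in text for term in SITEWIDE_BANNED_TERMS):
        return "removed: sitewide policy"
    # Volunteer moderators enforce their own, more specific rules.
    if any(term in text for term in COMMUNITY_BANNED_TERMS.get(post.community, ())):
        return "removed: community rule"
    # Layer 3: everyday users acting as moderators via reports and votes.
    if post.reports >= 5:
        return "escalated for human review"
    if post.score <= -10:
        return "hidden: heavily downvoted"
    return "visible"

print(moderate(Post("r/recovery", "vendor links here", score=3)))
# -> removed: community rule
```

The point of the sketch is the ordering: the broad, centrally enforced rules run first, and the cheaper, more scalable community and user layers absorb everything else.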
We have 500 employees and a large user base, more than enough to be considered a large company, though we are an underdog compared to our nearest competitors, public companies that tend to be 100 times our size. But we recognize that there is truly harmful material on the internet, and we are committed to fighting it. It is important to understand that, rather than helping, even seemingly narrow changes to 230 can undermine the power of community moderation and harm the most vulnerable. Take the opioid epidemic, which has been raised in discussions around 230. We have many communities where users struggling with addiction can find support to help them on their way to sobriety. With a carve-out in this area, those communities would become too risky to host, forcing us to close them down. This would be a disservice to the people struggling, but it is exactly the type of decision that restrictions on 230 would force on us. Section 230 is a uniquely American law. It is a balanced approach that has allowed the internet to flourish while also incentivizing good faith attempts to mitigate the unavoidable downsides of free expression. While these downsides are serious and demand the attention of us in industry and you in Congress, they do not outweigh the overwhelming good that 230 has enabled. I look forward to your questions. You are recognized for five minutes. Thank you for having me, and for assembling such a thoughtful panel to join. When Congress adopted section 230 over 20 years ago, the goal was to incentivize tech companies to moderate content, and although Congress of course wanted the internet, as they could imagine it at the time, to be open and free, they also knew that openness would risk offensive material, and I am using their words. So what they did was devise an incentive: a legal shield for Good Samaritans who are trying to clean up the internet, accounting for failures of both under-filtering and over-filtering of content. The purpose of the statute was clear, but the words were not, and what we have seen is courts massively overextending section 230 to sites that are irresponsible in the extreme and that produce extraordinary harm. We have seen the liability shield applied to sites whose entire business model is abuse: revenge porn operators, and sites that do nothing but serve users fake sex videos, get to enjoy this immunity. And interestingly, it is not only bad samaritans who have enjoyed a legal shield from responsibility, but also sites that have nothing to do with speech, sites that traffic in dangerous goods, like Armslist.com. And the costs are significant. This overbroad interpretation allows bad samaritan sites, reckless, irresponsible sites, to impose costs on people's lives. I am going to take the case of online harassment, because I have been studying it for the last 10 years. The costs are significant, particularly to women and minorities. The online harassment that is often hosted on these sites is costly to people's central life opportunities. When a Google search of your name contains rape threats, your nude photo posted without your consent, your home address because you have been doxxed, and lies and defamation, it is hard to get a job and it is hard to keep a job. And victims are driven offline in the face of online assaults. They are terrorized; they often change their names, and they move. So in many respects the free speech calculus is not necessarily a win for free speech, as we are seeing diverse viewpoints and individuals being chased offline. Now, the market, I think, is ultimately not going to solve this problem.
So many of these businesses make money off of online advertising and salacious content that attracts eyeballs, so I do not think we can rely on the market itself to solve this problem. That leaves, of course, legal reform. The question is how we should do it. We have to keep section 230; it has tremendous upside. But we should return it to its original purpose, which was to condition the shield on being a Good Samaritan, on engaging in what Benjamin Wittes and I have called reasonable content moderation practices. There are other ways to do it as well, and in my testimony I sketch some solutions. But we have got to do something, because doing nothing has costs. It says to the victims of online abuse that their speech and their equality are less important than the business profits of some of these purveyors. The chair recognizes Dr. McSherry for five minutes. As legal director for the Electronic Frontier Foundation, I want to thank the chairs, ranking members and members of the committee for the opportunity to share our thoughts with you today on this important topic. For nearly 30 years, EFF has represented the interests of technology users, both in court cases and in broader policy debates, to ensure that law and technology support our civil liberties. Like everyone in this room, we are well aware that online speech is not always pretty. Sometimes it is extremely ugly and causes serious harm. We all want an internet where we are free to meet, create, organize, share, debate and learn. We want to have control over our online experience and to feel empowered by the tools we use. We want our elections free from manipulation, and we want women and marginalized communities to be able to speak openly about their experiences. Chipping away at the legal foundations of the internet in order to pressure platforms to better police the internet is not the way to accomplish those goals. Section 230 made it possible for all kinds of voices to get their message out to the whole world without having to acquire a broadcast license, own a newspaper or learn how to code. The law has thereby helped remove much of the gatekeeping that once stifled social change and perpetuated power imbalances. And that is because it does not just protect tech giants. It protects regular people. If you have forwarded an email, a news article, a picture or a piece of political criticism, you have done so with the protection of section 230. If you maintain an online forum for a neighborhood group, you have done so with the protection of section 230. If you use Wikipedia to figure out where George Washington was born, you have benefited from section 230. And if you are viewing online videos documenting events in real time in northern Syria, you are benefiting from section 230. Intermediaries, be they social media platforms, news sites or email forwarders, are protected by section 230 not just for their own benefit; they are protected so they can be available to all of us. There is another practical reason to resist the impulse to amend the law to pressure platforms to moderate user content: simply put, they are bad at it. As EFF and many others have shown, they regularly take down all kinds of valuable content, partly because it is often difficult to draw clear lines between lawful and unlawful speech at scale. Those mistakes often silence the voices of already marginalized people. Moreover, increased liability risk will inevitably lead to over-censorship. It is a lot easier and cheaper to take something down than to pay lawyers to fight over it, particularly if you are a smaller business or a nonprofit.
Automation is not a magical solution; context matters, and very often, when you are talking about speech, robots are pretty bad at nuance. For example, in December 2018 the blogging platform Tumblr announced a ban on adult content. In an attempt to explain the policy, they identified several examples of content that would still be acceptable under the new rule. Shortly after, Tumblr's own filtering technology flagged those very images as unacceptable. The last reason: new legal burdens are likely to stifle competition. Facebook and Google can afford to throw millions at moderation, automation and litigation. But smaller competitors, or would-be competitors, do not have that kind of budget. So in essence we would have opened the door to a few companies and then slammed that door shut for everyone else. The free and open internet has never been fully free or open, and the internet can amplify the worst of us as well as the best. But the internet still represents and embodies an extraordinary idea: that anyone with a computing device can connect with the world, tell their stories, organize, educate and learn. Section 230 helps make that idea a reality, and it is worth protecting. Thank you, and I look forward to your questions. Thank you. Ms. Peters, you are recognized for five minutes. Distinguished members of the subcommittees, it is an honor to be here today to discuss one of the premier security threats of our time, one that Congress is well positioned to solve. I am the executive director of the Alliance to Counter Crime Online. Our team is made up of academics, security experts, NGOs and citizen investigators who have come together to eradicate serious organized crime and terror activity online. I want to thank you for your interest in our research and for asking me to join this panel of witnesses to testify. Like you, I had hoped to hear the testimony of the US Trade Representative, because we think keeping CDA 230 language out of America's trade agreements is critical to our national security. I have a long history of tracking organized crime and terrorism. I was a war reporter, and I wrote a book about the Taliban and the drug trade that got me recruited by US military leaders to support our intelligence community. I have mapped terror networks for Special Operations Command, the CIA and CENTCOM. In 2014 I received State Department funding to map wildlife trafficking supply chains, and that is when my team discovered that the largest retail markets for endangered species are actually located on social media platforms like Facebook. Founding the Alliance to Counter Crime Online, which looks at crime more broadly than wildlife, has taught me the incredible range and scale of illicit activity happening online. It is far worse than I ever imagined, and we can and must get it under control. Under the original intent of CDA 230, there was supposed to be a shared responsibility between tech platforms, law enforcement and organizations like ACCO, but tech firms are failing to uphold their end of the bargain; through broad interpretations by the courts, they have safe harbor. Deflecting scrutiny, they try to convince you that most illegal activity is confined to the dark web. That is not the case. Surface web platforms provide the same anonymity and payment systems, and a much greater reach. We are talking about illicit groups ranging from Mexican drug cartels to Chinese triads that have weaponized social media platforms. I am talking about US publicly listed social media platforms being used to move a wide range of illicit goods.
Now we are in the midst of a public health crisis, the opioid epidemic, which is claiming the lives of some 60,000 Americans each year. But Facebook, the world's largest social media company, only began tracking drug postings last year, and within six months the firm identified 1.5 million posts selling drugs. That is just what they admitted to removing. To put that in perspective, that is 100 times more postings than the notorious dark web site the Silk Road ever carried. Study after study by ACCO members and others has shown widespread use of Google, Twitter, Facebook, Reddit and YouTube to market and sell fentanyl, oxycodone and other addictive substances to US consumers in direct violation of US federal law. Every major internet platform has a drug problem. Why? Because there is no law that holds tech firms responsible, even when a child dies buying drugs on an internet platform. Tech firms play an active role in spreading harm. Their algorithms, originally well intentioned and designed to connect friends, also help criminals and terror groups connect to a global audience. ISIS and other terror groups use social media to recruit, fundraise and spread their propaganda. The ACCO alliance includes an incredible team of Syrian archaeologists tracking the trafficking of thousands of artifacts plundered from sites and sold, in many cases, by ISIS supporters. It is a war crime. We are tracking groups on Instagram, Google and Facebook where endangered species are sold, items ranging from rhino horn and elephant ivory to chimpanzees and cheetahs. The size of these markets is threatening species with extinction. I could continue to sit here and horrify you all morning: illegal dogfighting, live videos of children being sexually abused, human remains, counterfeit goods. It is all just a few clicks away. The tech industry routinely claims that modifying CDA 230 is a threat to freedom of speech. But CDA 230 is a law about liability, not freedom of speech. Please try to imagine whether any other industry in this country has ever enjoyed such an incredible subsidy from Congress: total immunity, no matter what harm their product brings to consumers. Firms should have implemented internal controls to prevent illicit activity from occurring, but it was cheaper and easier to scale while looking the other way. They were given this incredible freedom, and they have no one to blame but themselves for squandering it. We want reforms to the law so that firms face liability for hosting terror and crime content, regulation so that firms must report crime and terror activity to law enforcement, and appropriations so that law enforcement can contend with this data. If it is illegal in real life, it ought to be illegal to host it online. It is imperative that we reform CDA 230 to make the internet a safer place for all. The gentlelady yields back. Ms. Oyama, you are recognized for five minutes. Chairs, ranking members and distinguished members of the committee, thank you for the opportunity to appear before you today. I appreciate your leadership on these issues and welcome the opportunity to discuss our work in these areas. My name is Katie Oyama, and I am the head of intellectual property policy at Google, where I advise the company on public policy frameworks for the management and moderation of online content of all kinds. At Google our mission is to organize the world's information and make it universally accessible and useful. Our services, and many others, are positive forces for creativity, learning and access to information.
This creativity and innovation continues to yield enormous economic benefits to the United States. However, like all means of communication that came before it, the internet has been used for both the best and worst of purposes. This is why, in addition to respecting local law, we have robust policies, procedures and community guidelines that govern what activity is permissible on our platforms, and we update them regularly to meet the changing needs of our users and society. My testimony today will focus on three areas: the history of 230 and how it has helped the internet grow, how 230 contributes to our efforts to take down harmful content, and Google's policies across our products. Section 230 has created a robust internet ecosystem where commerce, innovation and free expression thrive, while also enabling providers to take aggressive steps to fight online abuse. Digital platforms connect millions of consumers to legitimate content across the internet, facilitating $29 trillion in online commerce each year. Addressing illegal content is a shared responsibility, and our ability to take action on content is underpinned by 230. The law not only clarifies when services can be held liable for third-party content; it also creates the legal certainty necessary for services to take swift action against harmful content. Section 230's Good Samaritan provision was introduced to incentivize self-monitoring and to facilitate content moderation. It does nothing to alter platform liability for violations of federal criminal law, which is expressly exempted from the scope of the CDA. Over the years the importance of section 230 has grown, and it is critical in ensuring economic growth: a study found that over the next decade 230 will contribute an additional 4.25 million jobs and $440 billion in growth to the economy. Investors in the startup ecosystem have said that weakening online safe harbors would have a recessionary impact on investment. And internationally, 230 is a differentiator for the US. China, Russia and others take a different approach, censoring speech online, sometimes including speech that is critical of political leaders. Perhaps the best way to understand the importance of 230 is to consider what might happen if it were not in place. Without 230, search engines, political blogs and review sites of all kinds would either not be able to moderate content at all or they would overblock, either way harming the consumers and businesses that rely on their services every day. Without 230, platforms could be sued over decisions around the removal of content such as spam, mature content or extremist videos. Because of 230, we can and do enforce policies that ensure our platforms are safe, useful and vibrant. For each product we have a specific set of rules and guidelines that are suitable for the type of platform and the risk of harm. These include clear content policies and community guidelines, flagging mechanisms to report content that violates them, and increasingly effective machine learning that can facilitate the removal of content at scale, sometimes before a single human user has ever been able to access it. In the three-month period from April to June 2019, we removed over 9 million videos from our platform for violating our guidelines. Eighty-seven percent of that content was flagged by machines first rather than by humans, and of the videos detected by machines, 81 percent were never viewed by a single user.
We now have over 10,000 people across Google working on content moderation, and we have invested hundreds of millions of dollars in these efforts. In my written testimony I go into further detail about our policies and procedures for tackling content on Search, Google Ads and YouTube. We are committed to being responsible actors and part of the solution, and Google will continue to invest in the people and technology needed to meet this challenge. Dr. Farid, you are recognized for five minutes. Members of both subcommittees, thank you for the opportunity to speak today. Technology and the internet, as you have heard, have had a remarkable impact on our lives and society. Many educational, entertaining and inspiring things have emerged from the past two decades of innovation. At the same time, many horrific things have emerged: a massive proliferation of child sexual abuse material, the radicalization of international terrorists, the distribution of deadly drugs, disinformation campaigns designed to sow civil unrest and disrupt democratic elections, the proliferation of deadly conspiracy theories, the routine harassment of women and underrepresented groups with threats of sexual violence and revenge and nonconsensual pornography, small- and large-scale fraud, and failures to protect our personal and sensitive data. How, in 20 short years, did we go from the promise of an internet that would democratize access to knowledge and make the world more understanding and enlightened to this litany of horrors? A combination of naivete, ideology and growth at all costs has led the titans of tech to fail to install proper safeguards on their services. The problems they face today are not new. As early as 2003 it was well known that the internet was enabling child predators. Despite early warnings, the technology sector dragged its feet through the mid-2000s and did not respond to the known problems of the time, nor did it put in place the proper safeguards to contend with what should have been the anticipated problems we face today. In defense of the technology sector, they are contending with an unprecedented amount of data: 500 hours of video uploaded to YouTube every minute, more than 1 billion daily uploads to Facebook, 500 million tweets per day. On the other hand, these same companies have had over a decade to get their houses in order and have failed to do so, and at the same time they have managed to profit handsomely by harnessing the scale and volume of the data that is uploaded to their services every day. These services do not seem to have trouble dealing with unwanted material when it serves their interests. They remove copyright infringement, and they effectively remove legal adult pornography, because otherwise their services would be littered with pornography, driving away advertisers. During his 2018 congressional testimony, Mr. Zuckerberg invoked artificial intelligence as the savior for content moderation, and we were told to wait 5 to 10 years. Putting aside that it is not clear what we should do in the intervening decade or so, this claim is almost certainly overly optimistic. For example, earlier this year Facebook's chief technology officer showcased Facebook's latest AI technology for discriminating images of broccoli from images of marijuana. Despite all the latest advances in AI and pattern recognition, the system is only able to perform this task with an accuracy of about 90 percent. This means that one in ten times, the system is simply wrong. At the scale of a billion uploads a day, this technology cannot possibly moderate content on its own.
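To make that scale argument concrete, here is the back-of-the-envelope arithmetic behind it, using the upload volume and accuracy figures quoted in the testimony; the extra accuracy levels in the loop are illustrative assumptions:

```python
# Back-of-the-envelope: classifier errors per day at platform scale.
daily_uploads = 1_000_000_000  # ~1 billion daily uploads, per the testimony
accuracy = 0.90                # the broccoli-vs-marijuana classifier

print(f"{daily_uploads * (1 - accuracy):,.0f} errors/day at 90% accuracy")
# -> 100,000,000 errors/day

# Even dramatically better classifiers still err at a volume no human
# review team can absorb:
for acc in (0.99, 0.999, 0.9999):
    print(f"{acc:.2%} accurate -> {daily_uploads * (1 - acc):,.0f} errors/day")
# -> 10,000,000 / 1,000,000 / 100,000 errors/day
```

Even at 99.99 percent accuracy, a level far beyond today's systems, the absolute error count remains in the hundreds of thousands per day, which is the "orders of magnitude" gap the testimony refers to.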
This discrimination task is surely much easier than identifying the broad class of extremism and disinformation material. The promise of AI is just that, a promise, and we cannot wait a decade or more in the hope that AI will eventually improve by orders of magnitude. To complicate things even more, earlier this year Mr. Zuckerberg announced that Facebook will implement end-to-end encryption across its services, preventing anyone, including the government and Facebook itself, from seeing the content of communications. That encryption will make it even more difficult to contend with the litany of abuses I have enumerated. We can and must do better when it comes to contending with some of the most violent, harmful and dangerous content online. I simply reject the naysayers who argue that it is too difficult from a technological perspective, or those who say that reasonable and responsible content moderation will lead to the stifling of an open exchange of ideas. Thank you, and I look forward to taking your questions. Thank you, Dr. Farid. Well, we have concluded our opening statements, and we are going to move to member questions. Each member will have five minutes to ask questions of our witnesses, and I will start by recognizing myself for five minutes. I have to say, as I said at the beginning of my remarks, this is a complex issue, a very complex issue. And I think we have all heard the problems. What we need to hear is solutions. Let me just start by asking all of you, just by a show of hands: who thinks that online platforms can do a better job of moderating the content on their websites? So that is unanimous. I agree. I think it is important to note that we all recognize that content moderation online is lacking in a number of ways and that we all need to address this issue better. And if you, who are the platforms and the experts in this technology, do not, and you put that on our shoulders, you may see a law that you do not like very much and that has a lot of unintended consequences for the internet. I would say you need to do a better job as an industry of getting together and discussing better ways to do this. The idea that you can buy drugs online and we cannot stop it, to most Americans hearing that, they do not understand why that is possible, why it would not be easy to identify people who are trying to sell illegal things online and take those sites down. Child abuse: it is very troubling. On the other hand, I do not think anybody on this panel is talking about eliminating section 230. So the question is, what is the solution that lies between not eliminating 230, because of the effects that would have on the whole internet, and making sure that we do a better job of policing this? Mr. Huffman, a lot of people know Reddit, but it is really a relatively small company when you place it against some of the giants, and you host many communities and rely on your volunteers to moderate discussions. But I know that you have shut down a number of controversial subreddits that have spread deepfakes, disturbing content, misinformation and dangerous conspiracy theories. What would Reddit look like if you were legally liable for your company's decisions to moderate content? Reddit would be forced to go to one of two extremes. In one version, we would stop looking. We would go back to the pre-230 era, which means if we do not know, we are not liable. And that, I am sure, is not what you intend, and certainly not what we want. It would not be aligned with our mission of bringing community and belonging to everybody in the world.
The other extreme would be to remove any content, or prohibit any content, that could be even remotely problematic. Since Reddit is a platform where 100 percent of our content is created by our users, that fundamentally undermines the way Reddit works. It is hard to give you an honest answer about what Reddit would look like; I am not sure Reddit as we know it would exist in a world where we were liable for every piece of user-generated content. Dr. McSherry, you talk about the effects on speech if section 230 were substantially repealed or altered. But what other tools could Congress use to incentivize platforms and encourage a healthier online ecosystem? What would your recommendation be, short of eliminating 230? Well, I think a number of the problems that we have talked about today so far, which I think everyone agrees are very, very serious, and I want to underscore that, are actually often addressed by existing laws that target the conduct itself. So, for example, in the Armslist case, the selling of the gun that was so controversial was actually perfectly legal under Wisconsin law. Similarly, many of the problems that we have talked about today are already addressed by federal criminal laws that already exist, and there section 230 is not a barrier, because, of course, there is a carve-out for federal criminal law. So I would urge this committee to look carefully at laws that actually target the behavior we are concerned about, and perhaps start there. Ms. Peters, you did a good job horrifying us with your testimony. What solutions do you offer, short of repealing 230? I do not propose repealing 230. I think that we want to continue to encourage innovation in this country; it is a core driver of our economy. But I do believe that CDA 230 should be revised so that if something is illegal in real life, it is illegal to host it online. I do not think that is an unfair burden for tech firms, certainly not for the wealthiest firms, to take on. Companies in other industries have to run checks to make sure that when they do business with foreigners, they are not doing business with anyone on a terror blacklist. Is it so difficult for companies like Google and Reddit to make sure that they are not hosting an illegal pharmacy? I see my time has expired, but I thank you; I think we get the gist of your answer. The chair now yields to the ranking member for five minutes. Well, thank you, Mr. Chairman, and again, thanks to all our witnesses. Ms. Oyama, if I can start with you: a recent New York Times article outlined the horrendous nature of child sex abuse online and how it has grown exponentially over the last decade. My understanding is that tech companies are only legally required to report images of child abuse when they discover them; they are not required to actively look for them. I understand you have made voluntary efforts to look for this kind of content. How can we encourage platforms to better enforce their terms of service, or to proactively use the sword provided by subsection (c)(2) of section 230 and make good faith efforts to create accountability within platforms? Thank you for the question, and particularly for focusing on the importance of subsection (c)(2) in incentivizing platforms to moderate content. I can say that at Google we think transparency is critically important, so we publish our guidelines. For YouTube we publish a quarterly report in which we show, across the different categories of content, the volume of content we have been removing. We also allow users to appeal.
So if their content is taken down and they think that was a mistake, they have the ability to appeal and track what is happening with that appeal. We do understand that this kind of transparency is really critical for user trust and for discussions with policymakers on these critically important topics. Thank you. Ms. Citron, a number of claimants have asserted section 230 immunity in the courts. Some are platforms solely responsible for the content at issue. Was section 230 intended to cover those platforms? I keep doing that. So, platforms that are solely responsible for the content, where there is no user-generated content and they are creating the content themselves: the question is whether that would be covered by the legal shield of 230? Is that the question? Right. No. They would be responsible for content that they have created and developed, so 230, that legal shield, would not apply. Thank you. Dr. Farid, are there tools available, like PhotoDNA or Content ID, to flag the sale of illegal drugs online? If the idea is that platforms should be incentivized to take down blatantly illegal content, shouldn't keywords or indicators associated with opioids be searchable through an automated process? The short answer is yes. There are two ways of doing content moderation. The first handles material that has already been identified, typically by a human moderator. Whether it is child abuse material, illegal drugs, terrorism-related material or copyright infringement, that material can be digitally fingerprinted and stopped from future upload and distribution. That technology has been well understood and has been deployed for over a decade, though I think it has been deployed anemically across the platforms and not nearly aggressively enough. That is how known content is handled today. The second form of moderation I call day zero: finding, for example, the Christchurch video on first upload. That is incredibly difficult, and it typically requires journalists or law enforcement to find the content. But once that content is identified, it can be blocked from future upload. I will point out that today you can go onto Google and type "find fentanyl online," and it will show you illegal pharmacies where you can buy fentanyl. That is not a difficult find. We are not talking about the dark web or things on page 20 of the results; it is on the first page. In my opinion, there is no excuse for that. Let me follow up. You say what the platforms are doing out there is anemic. Last year in this room we passed over 60 pieces of legislation dealing with the drug crisis that we have in this country, fentanyl being one of them. You just mentioned that you can just type in fentanyl and you can find it. Okay. Because again, what we are trying to do is make sure we do not repeat the 72,000 deaths we had in this country a year ago, with over 43,000 associated with fentanyl. So, okay, how do we go to the platforms and say, you have got to enforce this, because we do not want this stuff flowing in from China? How do we do that? Well, this is what the conversation is about. I am with everybody else on the panel: we do not repeal 230, but we make it a responsibility, not a right. If your platform can be weaponized in the way we have seen, across the board, with the litany of things I listed in my opening remarks, surely something is not working. I can find this content on Google on page 1, and not just me: my colleagues at this table, and investigative journalists too. We know this content is there. It is not hiding. It is not difficult.
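The fingerprint-and-block approach Dr. Farid describes works by hashing each upload and checking it against a list of fingerprints of content that humans have already identified. A minimal sketch follows; production systems use perceptual hashes such as PhotoDNA, which survive re-encoding and resizing, whereas the cryptographic hash used here for simplicity matches only byte-exact copies, and all function names are illustrative:

```python
import hashlib

# Fingerprints of content already identified by human moderators,
# journalists or law enforcement.
known_bad_hashes: set = set()

def fingerprint(content: bytes) -> str:
    # Stand-in for a perceptual hash like PhotoDNA; SHA-256 only
    # matches exact copies, which is enough to illustrate the flow.
    return hashlib.sha256(content).hexdigest()

def add_to_blocklist(content: bytes) -> None:
    """Called once human review confirms the content is prohibited."""
    known_bad_hashes.add(fingerprint(content))

def allow_upload(content: bytes) -> bool:
    """Screen every new upload before it is distributed."""
    return fingerprint(content) not in known_bad_hashes

# Day zero: a human flags the original; every re-upload is then blocked.
add_to_blocklist(b"<identified harmful video bytes>")
print(allow_upload(b"<identified harmful video bytes>"))  # False (blocked)
print(allow_upload(b"<unrelated video bytes>"))           # True
```

This is why the "day zero" identification is the hard part: the matching step itself is cheap and scales to every upload, but it can only catch what has already been fingerprinted.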
We have to ask the question: if a reasonable person can find this content, surely Google, with all its resources, can find it as well. Now, what is the responsibility? I think you said it earlier, too: you should enforce your terms of service. If we do not want to talk about 230, let us talk about terms of service. The terms of service of most of the major platforms are really good; it is just that the platforms do not do very much to enforce them in a clear, consistent, and transparent way. Thank you very much. Mr. Chairman, my time has expired. I yield back. The chair now recognizes the chair of the subcommittee on Consumer Protection for five minutes. Thank you, Mr. Chairman. Ms. Oyama, you described in your testimony what would happen without 230. I want to see if any hands would go up: has anybody here said that we should abandon 230? Okay. So that is not the issue. This is a sensible conversation about how to make it better. Mr. Huffman, we had what I think was a really productive meeting yesterday, and I want to thank you for explaining to me what your organization does and how it is unique. You also said in your testimony that Section 230 is a unique American law, yet when we talked yesterday you thought it was a good idea to put it in a trade agreement dealing with Mexico and Canada. If it is a unique American law, let me just say that I think trying to fit it into that framework at this time is inappropriate, and I would like to quote from a letter that the chairman and Ranking Member Walden wrote some time ago to Mr. Lighthizer: we find it inappropriate for the United States to export language mirroring Section 230 while such serious policy discussions are ongoing. That is what is happening right now; we are having a serious policy discussion. But I think what the chairman was trying to do, and what I want to try to do, is figure out what we really want to amend or change in some way. So again, briefly, for the three of you who have talked about the need for changes, let me start with Ms. Citron: what do you want to see changed in 230? I would like to bring the statute back to its original purpose, which was to apply to Good Samaritans engaged in responsible and reasonable moderation practices. We can change it; I have language to change the statute that would condition the immunity so that we do not treat a user or provider of an interactive computer service that engages in reasonable content moderation practices as a publisher or a speaker. So it would keep the immunity. Let me just suggest, if there is language, I think we would like to see suggestions. Ms. Peters, if you could: I think you pretty much scared us as to what is happening. How can we make 230 responsive to those concerns? Thank you for your question. We would love to share some proposed language with you about how to reform 230 to better protect against organized crime and terror activity on platforms. One of the things I am concerned about, and a lot of tech firms do this, is that when they detect illicit activity, or it gets flagged to them by users, their response is to delete it and forget about it. What concerns me is two things. Number one, that essentially destroys critical evidence of a crime; it actually helps criminals cover their tracks. Compare that to the situation we have in the financial industry, and even in aspects of the transport industry: if they know illicit activity is going on, they have to share it with law enforcement, and within a certain time frame.
I certainly want to see the content removed, but I do not want to see it simply deleted, and I think that is an important distinction. I would like to see a world where the big tech firms work collaboratively with civil society and with law enforcement to root out some of these evils. I am going to cut you off just because my time is running out and I do want to get to Dr. Farid with the same question. So I would welcome concrete suggestions. Thank you. I agree with my colleague Ms. Citron: I think 230 should be a privilege, not a right. You have to show you are doing reasonable content moderation. I do think we should be worried about the small startups; if we regulate carelessly now, the ecosystem will become more monopolistic, and we need it to become more open. The last thing I would say is that the rules have to be clear, consistent, and transparent. Thank you. I yield back. The chair now recognizes Ms. McMorris Rodgers for five minutes. Thank you, Mr. Chairman. Section 230 was intended to provide online platforms with a shield from liability as well as a sword to make good-faith efforts to filter, block, or otherwise address certain offensive content online. Professor Citron, do you believe companies are using the sword, and if not, why is that? I have been working with Facebook and Twitter for about eight years. So I would say the dominant platforms, including the folks on this panel, are at this point engaging in what I would describe, at a broad level, as fairly reasonable content moderation practices. I would say they could do far better on transparency, about what they mean when they forbid hate speech: what is the harm they want to avoid, with examples. And they could be more transparent about the processes they use when they make decisions, right, to have more accountability. But what really worries me are the sort of renegade sites as well: the ones that encourage incitement and do no moderation, dating apps that have no ability to ban IP addresses, and, frankly, sometimes the biggest of providers, not the small ones, who know they have illegality happening on their platforms and do nothing about it. Why are they doing that? Because of Section 230 immunity. The dating app Grindr comes to mind: it hosted impersonations of someone's ex, and that person was using Grindr to send thousands of men to this man's home. Grindr heard 50 times from the individual being targeted and did nothing about it. When they finally responded, after getting a lawsuit, their response was that their technology does not allow them to track IP addresses; and Grindr is fairly dominant in this space. But when the impersonator went to Scruff, a smaller dating site, posing as the individual and sending people to his home, Scruff responded right away. They said, we can ban the IP address, and they took care of it. So I think the notion that this is about small versus large is belied by my experience; there are good, responsible practices and there are irresponsible, harmful practices. Okay. Thank you for that. Mr. Huffman and Ms. Oyama, your company policies specifically prohibit illegal content or activities on your platforms. Regarding your terms of service, how do you monitor content on your platform to ensure that it does not violate your policies? Maybe I will start with Mr. Huffman. Sure. In my opening statement, I described the three layers of moderation that we have on Reddit. First is our company's moderation, our own team. This is the group that both writes the policies and enforces the policies.
Primarily the way they work is enforcing these policies at scale, looking for aberrant behavior and known problematic sites or words. We participate in cross-industry hash sharing, which allows us to find images, for example images exploitative of children, that have been shared industry-wide, or fingerprints thereof. Next are our community moderators, and then, backing them up, the users themselves. Those two groups participate together in removing content that is inappropriate for their community and in violation of our policies. Our content policy is not very long, but one of its points is no illegal content: no regulated goods, no drugs, no guns, anything of that sort. You are seeking it out, and if you find it, you get it off the platform? That is right. Because 230 does not give us protection from criminal liability, we are not in the business of committing crimes or helping people commit crimes; that would be problematic for our business. So we do our best to make sure it is not on our platform. Ms. Oyama, would you address that, and what you are doing if you find illegal content? Yes. We have very clear content policies, and we publish them online. For YouTube we give more examples and specifics so people understand. Of the 9 million videos that were removed from YouTube in the last quarter, 87 percent were detected first by machine; so automation is one very important way. The second way is human reviewers. We have community flagging, where any user can flag content and follow up on what happens with that complaint, and we have human reviewers who look at it, and we are very transparent in explaining that. When it comes to criminal activity on the internet, as you know, CDA 230 has a complete carve-out. In the case of Grindr, we have policies against harassment, but in that case, where there was real criminal activity, my understanding is that there was a defendant and a criminal case for harassment and stalking against him. On opioids, under federal criminal law there is a provision on the sale of controlled substances on the internet. In cases like that, where there is a law enforcement role, if there is proper legal process we would work with law enforcement to provide information under due process or a subpoena. Thank you. Okay. My time has expired. Thank you. Thank you so much, Mr. Chairman. I really want to thank this panel. I am a former constitutional lawyer, so I am always interested in the intersection between criminality and free speech. In particular, Professor Citron, I was reading your written testimony, which you discussed with Ms. Schakowsky, about how Section 230 should be revised: both to continue to provide First Amendment protections and to return the statute to its original purpose, which is to have companies act more responsibly, not less. In that vein, I want to talk during my line of questioning about online harassment, because this is a real issue that has only increased. The Anti-Defamation League reported that 24 percent of women and 63 percent of LGBTQ individuals have experienced harassment because of their gender or sexual orientation, compared to only 14 percent of men; and 37 percent of all Americans, of many backgrounds, have experienced severe online harassment, which includes sexual harassment, stalking, physical threats, and sustained harassment.
So I want to ask you, Professor, and also Ms. Peters, briefly, to talk to me about how Section 230 facilitates illegal activities. Do you think it undermines the value of those laws and, if so, how? Professor Citron. Let me say that in cases involving harassment, of course, there is a perpetrator and the platform that enables it, right? And most of the time the perpetrators are not pursued by law enforcement. In my work on crimes in cyberspace, I explore the fact that law enforcement often does not get it; they do not understand the abuse and they do not know how to investigate it. In the case of Grindr, there were something like ten protective orders violated, and law enforcement in New York has done nothing about it. So it is not true that we can always find the perpetrator, especially in cases of stalking, harassment, and threats. You see severe underenforcement of the law, particularly when it comes to gendered harms, and that is really what these laws are trying to protect against. Ms. Peters, do you want to comment? I want to say that on this issue, there ought to be something akin to a cyber restraining order, so that if somebody is stalking somebody on Grindr or OkCupid or Google, the site can be ordered to block that person from communicating with the other. Okay. And even under Section 230 immunity, can platforms ignore requests to take down this type of material? They have. Professor Citron, you are nodding your head. They do and they can, especially if those protective orders are coming from state criminal law. Okay. I wanted to ask you, Dr. McSherry: sexual harassment continues to be a significant problem on Twitter and other social platforms, and I know Section 230 is a critical tool that facilitates content moderation. As we have heard in the testimony, a lot of the platforms are not being aggressive enough in enforcing their terms and conditions. So what I want to ask you is: what can we do to encourage platforms to be more aggressive in protecting consumers and addressing issues like harassment? I imagine this hearing will encourage many of them to do just that. Thank you. We keep having hearings all the time. Absolutely, I understand that. So I actually think many, many of the platforms are pretty aggressive already. I agree with what many have said here today, which is that it would be nice if they would start by clearly enforcing their actual terms of service, which we share a concern about; often they are enforced very inconsistently, and that is very challenging for users. A concern that I have is with one proposal I have heard, which is that whenever you get a notice, you have a duty to investigate. That could backfire. Notices are one of the things abused by people who want to silence others, and the targets end up being the ones who are silenced rather than the other way around. What is your view of that? Pardon me? What is your view? There are two issues at hand. When you do moderation, you risk over-moderating or under-moderating. What I would argue is that we are way, way under-moderating. I look at where we fall down and make mistakes and take down content we should not, and I weigh that against the content we saw last year: child abuse material and terrorism and drugs. The weights are imbalanced. We have to rebalance and get it right. We are going to make mistakes, and right now we are making way more mistakes in allowing content than in disallowing it. The chair now recognizes Mr. Johnson for five minutes. Thank you, Mr. Chairman, and to you and to Chairwoman Schakowsky.
I have been in information technology for most of my adult life, and social responsibility has been an issue that I have talked about a lot. The absence of heavy-handed government regulation is, I think, what has allowed the internet and the social media platforms to grow like they have. But, at the risk of sounding cliched, there is that old line from the Jurassic Park movie: sometimes we are so focused on what we can do that we do not think about whether we should. And I think that is where we find ourselves with some of this. We have heard from some of our witnesses about the accessibility of a global audience through internet platforms being used for illegal and illicit purposes, by terrorist organizations and for the sale of opioids, which continues to severely impact communities across our nation, particularly in rural areas like the one I live in, in eastern and southeastern Ohio. However, internet platforms also provide an essential tool for legitimate communications and the free, safe, and open exchange of ideas, which has become a vital component of modern society and today's global economy. I appreciate hearing from all of our witnesses as our subcommittees examine whether Section 230 of the Communications Decency Act is empowering platforms to effectively self-regulate under this light-touch framework. So, Mr. Huffman: in your testimony, you discussed the ability of not only Reddit employees but its users to self-regulate and remove content that goes against Reddit's rules and community standards. Do you think other platforms, for example Facebook or YouTube, have been able to successfully implement similar self-regulating functions? If not, what makes Reddit unique in its ability to self-regulate? Sure. Thank you, Congressman. I am only as familiar with the other platforms as you probably are, which is to say I am not an expert. I do know they are not sitting on their hands; I know they are making progress. But Reddit's model is unique in the industry in that we believe the only thing that scales with users is users: sharing some of this burden with those people, in the same way that in our society there are many unwritten rules about what is acceptable to say. The same thing exists on our platform, and by allowing and empowering our users and communities to enforce these unwritten rules, it creates a healthier ecosystem. Ms. Oyama, in your testimony, you discuss the content allowed on your platforms, including the balance between respect for your policies and giving a platform to marginalized voices. Would a system like Reddit's upvotes and downvotes work on platforms like YouTube, and do dislikes on YouTube impact a video's visibility? Thank you for the question. As you have seen, users can give a thumbs up or thumbs down. It is one of many signals; it would not be determinative on its own, and on YouTube it is mostly used for relevance. And I really appreciate your question on content moderation. I want to make a point on the piece about harassment and bullying: we removed 35,000 videos from YouTube just in the last quarter. We can do this because of CDA 230. Whenever someone's content is removed, they may be upset, so there could be claims for defamation or breach of contract, and service providers large and small are able to implement these procedures, identify bad content, and take it down because of the provisions of CDA 230. Okay. Well, I have some other questions I am going to submit for the record, Mr. Chairman.
Let me summarize with this, because I want to stay within my time, and you are going to require me to stay within my time. You know, the absence of regulation, as I mentioned in my opening remarks, raises the bar on social responsibility. And I would suggest to the entire industry of the internet and social media platforms: we had better get serious about this self-regulating, or you will force Congress to do something that you might not want to have done. With that I yield back. The gentleman yields back. The chair recognizes Ms. Matsui for five minutes. Thank you. Ms. Oyama and Mr. Huffman, last week the Senate Intelligence Committee reported on Russia's use of social media. The report found Russia used platforms to sow social discord and influence the outcome of the 2016 election. What role can Section 230 play in ensuring platforms are not used again to disrupt our political process? Ms. Oyama, Mr. Huffman, comments? Thank you. CDA 230 is critically important in allowing services like ours to protect users around an election. It is a critical issue, especially with the election cycle coming up. On Google, across our systems in the 2016 election, fortunately, due to the measures we have been able to take on ad removal, we found there were only two accounts that had infiltrated our systems, and they had a spend of less than 5,000 dollars back in 2016. We continue to be extremely vigilant. We publish a political ads transparency report; we require that ads disclose who paid for them, and they show up in the ads library. So you feel that you are effective? We can always do more, but on this issue we are extremely focused, including working with campaigns. Mr. Huffman. Yes, Congresswoman. In 2016, we saw the same fake news submitted to our platform as we saw on the others. The difference on Reddit is that it was largely rejected by the community, the users, long before it came to our attention. One thing Reddit is good at, for better or worse, is being skeptical and questioning everything. Between then and now we have become dramatically better at finding groups of accounts that are working in a coordinated or inauthentic manner, and we collaborate with law enforcement. So based on everything we have learned in the years since, we are in a pretty good position coming into the 2020 election. Dr. Farid, in your testimony you talked about misinformation campaigns designed to disrupt democratic elections. This election interference troubles me and a lot of other people. You mentioned there is more platforms could be doing about moderating content online. What more should they be doing about this issue now? Let me give you one example. A few months ago we saw a fake video of Speaker Pelosi make the rounds. Facebook said: we know it is fake; we are leaving it up; we are not in the business of adjudicating the truth. So that was not a technological problem; that was a policy problem. That was not satire. It was not comedy. It was meant to discredit the Speaker. So I think we fundamentally have to relook at the rules. If you look at Facebook's rules, they say you cannot post things that are misleading or fraudulent. That was a clear case where the technology worked and the policy is unambiguous, and they simply failed to enforce the policy. They failed, okay. To YouTube's credit, they took the video down. In some cases there is a technological issue, but more often than not the platforms are simply not enforcing the rules they have in place.
So that is a decision they made, to not enforce the rules. Okay. Mr. Huffman, what do you think about what Dr. Farid just said? Sure, our response. There are two aspects of this. First, specifically as to Reddit, we have a policy against impersonation. A video like that can be used to manipulate people or to serve misinformation, but it also raises questions about the veracity of things we see and hear, and it prompts important discussions. So the context around whether a video like that stays up or comes down on Reddit is very important, and those are difficult decisions. I will observe that we are entering a new era, where we can manipulate videos. We have historically been able to manipulate text, and images with Photoshop; now, videos. So I do think that not only do the platforms have a responsibility, but we as a society have to understand the source of materials, for example which publication something comes from, because that is critically important. There will come a time, no matter what any of my tech peers say, when we will not be able to detect the fakery. Exactly. Ms. Oyama, you only have 15 seconds. On that specific piece of content, we have a policy against deceptive practices, and we removed it. But there is ongoing work that needs to be done to be able to identify deep fakes. Of course, even comedians sometimes use them; but in political contexts or other places they could severely undermine democracy. We have opened up data sets and we are working with researchers to better detect when media is manipulated. I appreciate the comment. I have a lot more to say, but you know how this is. Anyway, I will yield back my time. The chair recognizes Mr. Kinzinger for five minutes. Thank you all for being here today; we appreciate it. On that last line of questions: one of the best things about democracy is our ability to have free speech and share opinions, but this can also become a real threat. So I thank the chairman for yielding. I think it is safe to say that not every member of Congress has a plan for what to do about Section 230 of the Communications Decency Act, but I think we all agree that the hearing is warranted. We need to have a discussion about the origins and intent of that section and whether the companies that enjoy its liability protection operate in the manner intended. I will state up front: I generally appreciate the effort certain platforms have made to remove and block unlawful content. I would also say it is clearly not enough, and the status quo is unacceptable. It has been frustrating for me in recent years that my image and variations of my name have been used in scams; this goes back ten years, and the instances literally approach the 50s to 100s, counting just the ones we know about. These scams are increasingly pervasive. I not only brought it up in the hearing with Mark Zuckerberg last year, I also wrote to him this summer to continue to press him to act more boldly to protect his users. So I have a question. Sources indicate that in 2018, people reported hundreds of millions of dollars lost to online scammers, including 143 million dollars through romance scams. Given what so many people have gone through, it has become more and more important for platforms to verify user authenticity. So, to both Mr. Huffman and Ms. Oyama: what do your platforms do to verify the authenticity of user accounts? Two parts to my answer. The first is on the scams themselves: my understanding is that these scams are targeting veterans in particular, and we have a number of veterans communities on Reddit.
Built around support and shared experiences, they all, like all of our communities, create their own rules, and these communities prohibit fundraising generally, because the members of those communities know they can be targeted by this sort of scam in particular. So that is a nuance we think is important, and it highlights the power of our community model, because I, as a non-veteran, might not have had that same sort of intuition. Now, in terms of what we know about our users: Reddit is different from our peers in that we do not require people to share their real-world identity with us. We do know where they register from, what IP addresses they use, maybe their email address, but we do not force them to reveal their full name or gender, and this is important, because on Reddit there are communities that discuss sensitive topics, whether those same veterans communities or, for example, drug addiction communities or communities for people struggling as new parents. These are not the kinds of things people post on a platform like Facebook, saying, hey, I do not like being a parent. I understand, and I do not mean to cut you off, but I want to go to Ms. Oyama. Sure. I am very sorry to hear that that happened to you, Congressman. On YouTube we have a policy against impersonation. If you ever saw a channel impersonating you, you could flag that; the uploader could be required to verify with a government ID, and that would result in the channel being struck. On Search, which is an index of the web, results can show up from across the web, and we are trying to give relevant information to our users every single day; we suppress 19 billion links that are spam or potential scams to defend users. And then on ads, we have something called a risk engine that can kick out ads or fraudulent accounts. Thank you. You know, I am not upset about the sites that say Kinzinger is the worst congressman ever; that is understandable, I guess, for some people. But then you have, in my case as an example, among multiple cases, a woman who flew from India using her life savings; she thought we had been dating for a year, not to mention the money she gave to this perpetrator. One of the important things is that people need to be aware: if somebody has been dating you over a period of a year without ever authenticating who they are, it is probably not real. Ms. Peters, what are the risks of people not trusting other users' identities online? I think there are multiple risks, but I want to come back to the key issue for us, which is that if it is illicit, the sites should be required to hand over data to law enforcement and to work proactively with law enforcement. We have heard a lot today from the gentleman from Reddit about their efforts to moderate better, yet some of our members were able to go online just the other day, type in a search for "buy fentanyl online," and come up with many, many results; likewise "buy Adderall online." Those are fairly simple search terms. I am not talking about a high bar: getting rid of that on your platform does not seem too hard, or having that search automatically redirect to a site that would advise you to get counseling for drug abuse. We are not trying to be the thought police; we are trying to protect people from organized crime and terror activity. Thank you, and I will yield back, but I have a bunch more questions I will submit. The gentleman yields back. For the record, I want to say I do not think the gentleman is the worst member of Congress. I do not even think you are at the very bottom, Adam; you are not a bad guy. The chair recognizes Ms. Castor for five minutes.
Thank you, Chairman Doyle, for organizing this hearing, and thanks to all of our witnesses for being here today. I would like to ask you about the issue of 230 in the context of the horrendous tragedy in Wisconsin a few years ago involving armslist.com, where a man walked into a salon where his wife was working, shot her dead in front of their daughter, killed two others in that salon, and then killed himself. This is the type of horrific tragedy that is all too common in America today. But, Doctor, I think you misspoke a bit when you said that sale was all legal, because it was not: there was a temporary restraining order issued against that man, he went online shopping at armslist.com two days after that TRO was issued, and the next day he commenced his murder spree. And what has happened is, you know, Armslist knows that they have domestic abusers shopping; they have felons; they have terrorists shopping for firearms; yet they are allowed to proceed with this. Earlier this year, the Wisconsin Supreme Court ruled that Armslist is immune, even though they know they are perpetuating illegal conduct in these kinds of tragedies, because of Section 230. They basically said it did not matter that Armslist actually knew, or even intended, that its website would facilitate illegal firearm sales to dangerous persons; Section 230 still granted immunity. And then, Ms. Peters, you have highlighted that this is not an isolated instance: we are talking child sexual abuse content, illegal drug sales. It has gone way too far. So I appreciate that you all have proposed some solutions for this. Dr. Citron, you have highlighted a safe harbor under which, if companies use their best efforts to moderate content, they would have some protection. But how would this work in reality? Would it then be left up to the courts and those types of liability lawsuits, which kind of speaks to the need for very clear standards coming out of Congress, I think? So, yes, it would, and thank you so much for the question. How would we do this? It would be in the courts, at an initial motion to dismiss. For whichever company is being sued, the question would be: are you engaged in reasonable content moderation at large, not with regard to any one piece of content or activity. And it is true that it would be a forcing mechanism, a motion in federal court, where you would have companies explain what constitutes reasonableness. I think we can, all of us, come up with some basic threshold of what we think reasonable content moderation practices are, what we might describe as technological due process: having a process, and having clarity about what it is you prohibit. But it is going to have to be case by case, context by context, because what is a reasonable response to a deep fake, and I have done a considerable amount of work on deep fakes, is going to be different from the kind of advice I would give to Facebook, Twitter, and others on what constitutes a threat and how we figure that out. I am thinking about Dr. Farid's testimony about what we do about certain content: it would be in the public interest if clearly illegal content did not wind up as an issue of fact in a lawsuit. What do you think, Dr. Farid? If it is illegal content online, there really should not be a debatable question, right? I am not a lawyer, to be clear; I am a mathematician. But I completely agree with you.
In some cases, as we have seen over the years, and we saw this when we were deploying PhotoDNA, the technology companies want to get you muddled up in the gray area. So when we were trying to remove child abuse material, we had conversations like: what happens when it is an 18-year-old, what happens when it is not sexually explicit? Yes, those are complicated questions, but there is also clear-cut behavior. I am going to interrupt you because my time is short. There is also an issue with the number of moderators being hired to go through this content. One publication had a horrendous story about Facebook moderators; it caught my attention because one of the sites is in Tampa, Florida, my district. I am going to submit follow-up questions about moderators and some standards for that practice, and I encourage you to answer them and send them back. Thank you. The gentlelady yields. And now the chair recognizes the gentleman from Illinois for five minutes. Thank you, Mr. Chairman. It is great to be with you. I am sorry I missed a lot of this because I have been upstairs, but in my 23 years of being a member, I have never had the chance to address the same question to two different panels on the same day, so this has been an interesting convergence. Upstairs we were talking about vaping, underage use, and what is in the product. So I was curious: during the opening statements here, someone, and I apologize, I forget who, mentioned two cases. In one, the platform was not held liable because it really did nothing, and in the other, the one that tried to be the good actor got slammed. I do not know about slammed, but I see a couple of heads nodding. Ms. Citron, could you address that? Those are the two cases that effectively gave rise to Section 230. What animated Chris Cox to go to Ron Wyden and say, we have got to do something about this, was a pair of decisions, one of which basically says that if you do nothing, you are not going to be punished for it, but if you try and you moderate, that heightens your responsibility. So no good deed goes unpunished. Yes, and that is why we are here today, in many respects. If I tie this into what is going on upstairs: if someone uses a platform to encourage underage vaping with unknown nicotine content, and the site then decides to clean it up, under the way the law was written back then, this good deed, which most of us would agree is probably a good deed, would have gone punished. No, no: now we have Section 230; that is why we have Section 230. Platforms are encouraged to clean it up, just so long as they are doing it in good faith under Section 230(c)(2); they can remove it and they are Good Samaritans. Right. Okay, so that is the benefit of it. So in this debate, we heard earlier, in opening comments from some of my colleagues, about the USMCA debate: if that 230 language were removed, would we fall back to a regime by which the good-deed person could get punished? Is that correct? Everybody is kind of shaking their heads, mostly. Ms. Peters, you are not. Go ahead; just turn your mic on. We need to keep the 230 language out of the trade agreements. It is currently an issue of great debate here in the United States, and it is not fair to put it in a trade agreement; it would make it harder to revise the law later. Do not get me wrong: I want USMCA passed as soon as possible, without any encumbrance, and I am not a proponent of trying to delay that process. But I am just trying to work through this debate in good conscience.
I raised a concern upstairs: we believe in legal products that have been approved by the FDA, and we are concerned about black-market operations that would use platforms to sell to underage kids. That is how I would tie these two hearings together, which, again, I still think is pretty interesting. When we had the Facebook hearing a couple of years ago, I referred to a book called The Future Computed, which talks about the ability of industry to set standards. We do this across the board, whether it is the engineering of heating and air-cooling equipment or elsewhere: industry comes together for the good of the whole, the good actors, and says, here are our standards. And the fear is that if this sector does not do that, then the heavy hand of government will do it, which I think would cause quite a bit more in the way of problems. Dr. Farid, you are shaking your head. We have been saying to the industry: you have to do better. If you do not, somebody is going to do it for you. You do it on your terms or somebody else's; do it on your terms. We are not the experts. Part of that book talks about transparency and accountability. I would encourage the industry, and those who are listening, to help us move in that direction on their own before we do it for them. And with that, Mr. Chairman, I yield back my time. The gentleman yields, and the chair recognizes the chairman for five minutes. I would like to say this is very interesting testimony, and jarring in some ways. Ms. Peters, your testimony is particularly jarring. Have you seen any authentic offers of weapons of mass destruction for sale online? I have not personally, but we certainly have members of our alliance that are tracking weapons activity. And what is more concerning to me, in a way, is the number of illegal groups, from designated Hezbollah groups to al Qaeda, that maintain pages, link to their Facebook and Twitter pages, and run fundraising campaigns on them. I am just interested in the weapons of mass destruction issue. There are many platforms that allow for secret groups, and those groups are the epicenter of illicit activity. It is hard for us to get inside them; we have run undercover missions to do so. Thank you. Dr. Farid, you talk about the tension for tech companies between, on the one hand, the motivation to maximize the amount of time users spend online on their platforms and, on the other hand, content moderation. Would you talk about that briefly, please? We have been talking a lot about 230, but there is another tension point here. The underlying business model of Silicon Valley today is not to sell a product: you are the product. And in some ways, that is where a lot of the tension is coming from, because the metrics these companies use are how many users they have and how long those users stay on the platform. You can see why that is fundamentally in tension with removing users and removing content. So the business model is also an issue, and the way we deal with user privacy is also an issue here. If the business model is monetizing your data, then I need to feed you information. There is a reason why we call it the rabbit-hole effect on YouTube: there is a reason why, if you start watching certain types of videos, of children or conspiracies or extremism, you are fed more and more of that content, down the rabbit hole. There is real tension there, and it is the bottom line; it is not just ideological. We are talking about the underlying profits. Ms. Oyama, would you like to add to that? Thank you.
On many of the issues we are discussing today, whether it is harassment or extremism, I think it is important to remember the positive and productive potential of the internet. On YouTube, we have seen the It Gets Better campaign; we have seen counter-messaging; we have seen creators making content for youth to counter extremist messaging. It is important to remember what 230 has made possible. It is relevant to foreign policy as well: we would support its inclusion in USMCA or other digital trade frameworks. It supports the 172 billion dollar surplus the United States has in digital services, and it is critically important for small businesses to be able to moderate content and to prevent censorship from more repressive regimes abroad. It is a big issue, and it is hard to restrain yourself to brief answers; I understand that. But clearly companies could be doing more today, within the current legal framework, to address problematic content. I would like to ask each of you, very briefly, what you think can be done today, with today's tools, to moderate content. Briefly, please. Sure. For us, the biggest challenge is evolving our policies to meet new challenges, and as such, we have evolved our policies a dozen times in the last couple of years and will continue to do so into the future. For example, one recent change for us was expanding the harassment policy. Undoubtedly there will be new challenges in the future; being able to stay nimble and address them is important, and 230 gives us the space to adapt to those challenges. Ms. Citron. I would say the nimbleness of reasonableness: it ensures that we respond to changing threats. The threat landscape is going to change; we cannot write a checklist right now. But I would encourage companies not only to have policies, but to be clear about them and to be accountable. Just quickly: the issue for me with the reasonableness standard is that, as a litigator, that is terrifying. It means, as a practical matter, a lot of litigation risk as courts try to figure out what counts as reasonable. To your question, one of the crucial things we need, if we want better moderation practices and we want users not to be treated just as products, is to incentivize alternative business models. We need to make sure we clear a space for competition, so that when a given site is behaving badly, such as Grindr, people have other places to go with other practices, and other sites are encouraged to develop and evolve. Market forces can sometimes work; we need to let them work. I am going to have to cut off my time, and I am going to yield to the gentlelady from Indiana for five minutes. Thank you, Mr. Chairman, and thank you so much for this very important hearing. Dr. Farid: actually, to set the record, and this is the reason I am asking these questions, I am a former U.S. Attorney, and I was very involved in the Internet Crimes Against Children Task Force; we did a lot of work from 2001 to 2007. And you were right, Mr. Huffman: deep fake pornography was not a term at that time. So we certainly know that law enforcement has been challenged for decades now in dealing with pornography over the internet. And yet, I believe that we have to continue to do more to protect children, and to protect kids all around the globe. A concept, or tool, PhotoDNA, was developed a long time ago to detect criminal online child pornography; yet it means nothing if the platforms do not do anything with it.
So, PhotoDNA: is it a matter of tools or effort? How is it that this is still happening? Dr. Farid. This is a source of incredible frustration. I was part of the team that developed PhotoDNA in 2008 with Microsoft. There have been essentially no tools developed in the last decade that go beyond PhotoDNA. That is pathetic; that is truly pathetic when we are talking about this kind of material. How does an industry that prides itself on innovation say: we are going to keep using ten-year-old technology? This is not a technological limitation. We are simply not putting in the effort to develop and deploy the tools. And let me just share that having watched some of these videos, it is something you never want to see and cannot get out of your mind. I agree. And so I am curious, Ms. Oyama, you wanted to respond: how is it that we are still at this place? Thank you for the question. At Google, that characterization is not true at all: we have never stopped working on and prioritizing this. We can always do better, but we are constantly adopting new technologies. We initiated one of the first hash-matching tools, which enabled us to create digital fingerprints of this imagery and prevent it from being uploaded to YouTube, and we share it. There is also a new tool we have, the Content Safety API, which increases the speed at which this content can be identified. So this is going to continue to be a priority; I just want to be clear that, from the top of our company, we are committed to the internet being a safe, secure place for parents and children, and we will not stop working on this issue. I am very pleased to hear there have been advances and that you are sharing them; that is critically important. However, I will say that an Indiana State Police captain, Chuck Cohen, who has actually testified before Energy and Commerce, recently told me that one of the issues law enforcement runs into when working with internet companies is an attitude he calls minimally compliant. He said that internet companies will frequently not preserve content that can be used for an investigation when law enforcement makes the companies aware of the concerning materials, or they automatically flag content to law enforcement for review without actually checking whether it is truly objectionable or not. Do any of you have thoughts on his comment, and on how we balance law enforcement's critical need, because they are saving children all around the globe, Ms. Peters, without restricting companies' immunity for hosting concerning content? I just feel that if companies start getting fined, or face some sort of punitive damages, every time there is illicit content, we are going to see a lot less illicit content. If it is illegal in real life, it should be illegal to host it online. That is a very simple approach that I think we could apply industry-wide. And so I have a question, particularly because I asked Mark Zuckerberg this relative to terrorism and recruitment and ISIS, and now we need to be even more concerned about ISIS. I understand you have teams of people who take this content down. How many people are on your team, Mr. Huffman, dedicated to removing content? Removing content at scale and writing policy is about 20 percent of our company, about a hundred people. Ms. Oyama, how many people? More than 10,000 people work on content. That are involved in content moderation, the human side. But how many people are on the team that actually does that work? I am happy to get back to you. Thank you. With that I yield back. Thank you. The gentlelady yields.
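The hash-matching approach that Dr. Farid and Ms. Oyama describe above, fingerprinting material once it has been identified and then blocking every re-upload, can be sketched in a few lines. The sketch below is illustrative only: real systems such as PhotoDNA or the Content Safety API use perceptual hashes that survive resizing and re-encoding, whereas the cryptographic hash used here, a simplifying assumption, only catches byte-identical copies, and the shared block list is a hypothetical stand-in for the cross-industry hash sharing Mr. Huffman mentions.

import hashlib

# Hypothetical shared fingerprint set, standing in for cross-industry
# hash sharing; real entries would be perceptual hashes contributed by
# many platforms, not the empty set used here.
SHARED_BLOCK_LIST = set()

def fingerprint(content: bytes) -> str:
    # Digitally fingerprint a piece of content. SHA-256 is a simplification:
    # production systems use perceptual hashing so altered copies still match.
    return hashlib.sha256(content).hexdigest()

def register_known_bad(content: bytes) -> None:
    # Called once a human moderator has identified material as illegal,
    # the first of Dr. Farid's two forms of moderation.
    SHARED_BLOCK_LIST.add(fingerprint(content))

def screen_upload(content: bytes) -> bool:
    # Return True if the upload matches a known fingerprint and should be
    # stopped from future upload and distribution.
    return fingerprint(content) in SHARED_BLOCK_LIST

The point of the design is that the expensive judgment call, deciding that a piece of content is illegal, is made only once; every subsequent upload is a cheap set lookup, which is why the witnesses treat failure to deploy it as a policy choice rather than a technical limit.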
At this point I would like to introduce a letter for the record. Without objection, so ordered. Next, the chair recognizes the gentlewoman from New York for five minutes. I thank our chairmen and our ranking members for convening this joint subcommittee hearing today on fostering a healthier internet to protect consumers. I introduced the first House bill on deep fake technology, the DEEP FAKES Accountability Act, which would regulate fake videos. Deep fakes can be used to create fake revenge porn and threaten the very notion of what is real. Ms. Oyama, Mr. Huffman: deep fakes are shared on your platforms. What are the implications of 230 for your deep fakes policies? I will go. Thank you for the question. We released, I think around the same time as our peers, a prohibition on deep fake pornography. The challenge we face, of course, is the one you raise: the increasing difficulty of detecting what is real. This is where we believe Reddit's model shines, because our empowered users often highlight things that are suspicious. And I do believe very strongly that we as a society, not just we as platforms, have to develop defenses against this sort of manipulation, because it is only going to increase. Ms. Oyama? Thank you. Yes, on YouTube our overall policy is a policy against deceptive practices. There have been instances where we have seen these deep fakes; I think we identified that a video was a deep fake, and it was removed from the platform. Surfacing authoritative, accurate information is core to our business and core to our long-term business incentives. I would agree with what Mr. Huffman said. One of the things we are doing is investing deeply in the academic side, the research side, the machine learning side: opening up data sets of known deep fakes to get better at identifying what has been manipulated. We also have a revenge porn policy on Search for users who are victimized by that, and we expanded it to include synthetic images as well. Ms. Citron, can you talk about the implications of 230 for deep fake monitoring and removal? The activities that we have seen YouTube and Reddit engage in, being proactive and moving quickly in the face of clear illegality, are the type of activities Section 230 was meant to encourage. But the real problem is not these folks at the table. A Deeptrace Labs report two weeks ago showed that eight out of ten of the biggest porn sites host deep fake sex videos, and there are sites whose business model is deep fake sex videos; they overwhelmingly involve women, and it is users posting them. Does the current immunity structure reflect the unique nature of this threat? I do not think so. Section 230 as it is devised is, at its best, meant to incentivize the kind of moderation we are seeing here, but that is not the way the plain language is written: it does not condition the immunity on being responsible and reasonable. And so you have these outliers that cause enormous harm. A deep fake sex video comes up in a search of your name; it is findable, and people then contact you, and it is terrifying for victims. It is really these outlier companies, whose business model is this kind of abuse, that point to Section 230 and gleefully say: sue me; too bad, so sad. And that is the problem. What has become an existential threat is the rise of hate speech and propaganda on social media platforms. If 230 were removed, could platforms be charged with hosting hateful speech? Thank you for the question. I think this is a really important area that shows the power and importance of 230.
I mean, as you know, there are First Amendment restrictions on government regulation of speech, so there is additional responsibility for service providers like us to step up. We have a policy against hate speech: incitement to violence is prohibited, hate speech is prohibited, speech targeting groups with hatred is prohibited, and the takedowns that we do every single quarter, through automated flagging, are lawful and possible because of 230. When we take down content, someone's content is being taken down, and they can come back at any service provider, big or small; they may sue for defamation. Looking at the equities, 230 provides the space to innovate new ways to identify that content and take it down without fear of unmitigated litigation, legal risk, or legal uncertainty. Very well. Thank you very much. I yield back, Mr. Chairman. Madam Chairman. The gentlelady yields back, and now, Mr. Walberg, you are recognized for five minutes. I thank the chairman, and I appreciate the panel being here. Today's hearing has hit home for a lot of us, as we have discussed here. The internet is such an amazing tool. It has brought about great innovation, connecting people in ways never thought of before, and I think, truthfully, we look forward to what we will see in the future. But these are issues we have to wrestle with. Earlier this year, I was able to invite Hailey, from my district, to the State of the Union as my guest, to highlight the good work she is doing in my district and surrounding areas to help combat cyberbullying. She is a very perceptive individual who understands, as a young person, so much of what is going on, and she is now having a real impact in high schools and colleges as a result of her experience, attempting to make something positive out of it after she almost committed suicide as a result of cyberbullying. Thankfully the attempt was not successful, and she has shined a light on this. So, Mr. Huffman, what are you doing to address cyberbullying on your platform? Just two weeks ago, we updated our policies around harassment. It is one of the most complex and nuanced challenges we face. One of the changes we made was to allow harassment reports not just from the victim but from third parties, so that somebody else who sees instances of harassment can report them. For example, a teenager struggling after a sexual assault may have no place to turn, maybe not their friends or family, so they come to a platform like ours to talk to others in difficult situations; or people having suicidal thoughts come to our platform. Making sure that those people have safe experiences on Reddit is our first priority, regardless of the law, though we fully support lawmakers in this initiative. We have made a number of changes and will continue to do so in the future. Ms. Oyama? Thank you for the question. We would use our policies to help us enforce against this, and whether through automated detection, human flaggers, or community flags, we would be able to identify that content and take it down; we removed 35,000 videos for harassment and bullying in the last quarter. And I did want to echo Mr. Huffman's perspective that the internet and content sharing is also a really valuable place: it can serve as a lifeline to a victim of harassment or bullying. We see that all the time, and we want to continue to invest in important mental health resources and content like that. I am glad to hear you are continuing to invest and to help us as we move forward in this area. Google's ad network has come a long way in the last few years and will not serve ads next to potentially illegal activity.
This demonstrates that Google has been able to identify such activity. Why would it not just take down the content? That question is for you, Ms. Oyama. It is true that in our ads systems we have a risk engine, and we prohibit illegal content; there are many different policies, and more than 2 billion ads are stricken every year. So you are taking them down? Absolutely, before they are ever able to hit any page. That is very clearly in line with our business interests: our advertisers want us to be serving good ads against good content, and they want our network and our platforms to be safe. One final question. I understand that Google offers a feature to put a tag on copyrighted work that would take it down if it is pirated and uploaded, but that Google charges a fee for this. Why does Google not offer it for free? Thank you for the question; I think that may be a misperception. We have Content ID, which is automated. We have partners across the ecosystem; it is part of our partner program, and it does not cost them anything. Last year we paid out 3 billion dollars based on Content ID claims on copyrighted material: rights holders were able to take the majority of the ad revenue earned against that content, and it was sent back out to them. And that system, of being able to identify and detect content and then set controls, perhaps monetized and served, or, in the case of violent extremism, absolutely blocked, is something that powers much of YouTube. Thank you. I yield back. The gentleman yields back. And Mr. Loebsack, you are recognized for five minutes. Thank you. I want to thank the chairmen and ranking members of the subcommittees for holding this hearing today, and I want to thank the witnesses for your attendance as well. This has been very informative, even if we are not able to answer all the questions we would like to answer. And it is not the first time our committee has examined how the internet can be a force for innovation and human connection, which we all enjoy when we are making those connections, so long as they are positive, obviously, but also for criminality. I think everyone assembled here today is clearly very expert in your fields, and I appreciate hearing from you all as we consider how Section 230 has been interpreted by the courts since its initial passage and what changes, if any, we should be considering. I think there is a lot to consider as we discuss the full scope of what Section 230 covers, from cyberbullying and hate speech, whether on Facebook, YouTube, or elsewhere, to the transaction of harmful substances or weapons. I think the question today is twofold. First, we must ask whether content moderators are doing enough. And second, we must ask whether congressional action is required to fix these problems; that second question has been referred to obliquely throughout, by some of you and by some of us. After reviewing the testimony you have submitted, we have differences of opinion on whether Section 230 is where Congress should be focusing its resources. So to begin, I would like to ask everyone the same question. It is probably both the easiest question to answer and the most difficult, because it is exceedingly vague: what do good and bad content moderation look like? I will start with you, Mr. Huffman. Thank you for that philosophically impossible question. I think there are a couple of easy answers that I hope everybody on this panel would agree with. Bad content moderation is ignoring the problem, and that was the situation we were in pre-230.
That was the sort of perverse incentive we were facing. I think there are many forms of good content moderation. What is important to us at Reddit is twofold. One is empowering our users and communities to set standards of discourse in their communities and among themselves; we think this is the only truly scalable solution. The second is what 230 provides us, which is the ability to look deeply into our platform, to investigate, and to use some finesse and nuance when addressing new challenges. Thank you. Is the question about what makes content bad, or what makes content moderation bad? What is the difference between good and bad content moderation? Okay, because that is what we are talking about. Yes, of course. But it is preceded by the question of why we are here: what kinds of harms get us to the table to say why we should even talk about changing Section 230. And I would say what is bad, or incredibly troubling, is when sites solicit abuse and harm. That is the worst of the worst: sites that induce and solicit illegality. The question is how to deal with that problem, and I have got some answers for you if we want to get to that. You can submit them to us. I did, in the testimony. Thank you. Thank you for the question; I actually think it is a great question. As someone for whom supporting civil liberties online is a primary goal, I think good content moderation is precise, transparent, and careful. What we see far too often is that in the name of content moderation, and of making sure the internet is safe for everybody, all kinds of valuable and lawful content is actually taken offline. There are details about this in our submitted testimony. I will just point to one example: there is an archive of videos attempting to document atrocities, but those videos are often flagged as violating terms of service because they contain horrible material, even though the point is to support accountability. It is apparently very difficult for the service providers to tell the difference. Thank you. If it is illegal in real life, it ought to be illegal online. Content moderation ought to focus on illegal activity, and I think there has been little investment in technology that would improve this for the platforms, precisely because of Section 230 immunities. Thank you. I do realize I am out of time. I am sorry I posed such a broad question to all of you, but I would like to get the responses of the final two witnesses in writing, if I could, please. Thank you so much, and I yield back. The gentleman yields back. Now I recognize Mr. Carter for five minutes. Thank you, Madam Chair, and thank you all for being here. I know that you all understand how important this is, and I believe you all take it seriously. Thank you for being here and for participating. Ms. Peters, I would like to ask you: you pointed out that there is clearly quite a bit of illegal activity online, for instance, illegal pharmacies where you can buy pills without a prescription, terrorists profiteering off of looted goods, also products from endangered species; and then it gets even worse, you mentioned the sale of human remains and child exploitation, just gross things, if you will. How much effort do you feel the platforms are putting into containing this and stopping it? It depends on the platform, but that is a very good question, and I would like to respond with this: when was the last time anybody saw a dick pic on Facebook? Simple question.
If they can keep genitalia off these platforms, they can keep drugs off the platforms. The technology exists; these are policy issues, whether it's the policy decision to leave the doctored Nancy Pelosi video up. Do you ever go to them and express this? We are typically told that the firm is on it, and that in a few years AI is going to work. And when we've presented evidence of specific, identifiable crime networks and terror networks, we've been told they'll get back to us. Are you told they don't want to meet with you? No, we've usually gotten meetings or calls. I don't feel like an effort is being put forward. I'm doing my best to keep the federal government out of this; I don't want to stifle innovation, and I'm really concerned about that. But at the same time, look: we cannot allow this to go on. You're responsible, and if you don't do it, then you're going to force us to do it for you. I don't want that to happen. You mentioned you were getting funding to map wildlife supply chains; that's when you discovered there was a large retail market existing on platforms like Facebook and WeChat. Have any of these platforms made a commitment to stop this? And if they have, is it working? Is it getting any better? That's a terrific example to bring up. A number of tech firms have joined a coalition with the World Wildlife Fund and have taken a pledge to remove endangered-species content. I'm not aware that anything has changed; we have had researchers going online and logging it all the time. I'm going to be fair, and I'm going to let the Google representative respond; I'm sorry, I can't see that far. I'm going to let you respond to that. Do you feel like you're doing everything you can? Thank you. We can always do more, and I think we are committed to always doing more. I know that; I don't need you to tell me that. Let me tell you what we are doing. For wildlife, we are part of the coalition that Ms. Peters mentioned. On the national opioid epidemic, we are hugely committed to helping and playing our part in combatting it. There's an online component and an offline component; very little misuse of opioids originates on the internet. What we have done with Google Search, especially, is work with the FDA: they can send a letter if they see a link in search to a rogue pharmacy, and we will delist it from search. There's a really important offline component too, so we work with the DEA on prescription take-back day, and we feature those locations in Google Maps. I'm happy to come in, and I'd invite you to do just that; I'd like to talk with you further about this. Mr. Huffman, I'm going to give you the opportunity, because my staff has gone on Reddit, and they have googled, if you will, or searched for illegal drugs, and it comes up. I suspect you're going to tell me the same thing: we're working on it, we've almost got it under control. I've got a slightly different answer, if you'll indulge me. First of all, it is against our rules to have controlled goods on our platform. And we do see content of that sort; if you put that search into almost any platform, including your own email, I'm sure you would find a hit, at least in your spam folder. What comes up as spam first gets removed by our filters, and there is sometimes a lag between something being submitted and something being removed. That's how the system works. That said, we do take this issue very seriously, and our technologies have continued to improve along these lines. And that's exactly the sort of ability that 230 gives us.
It's the ability to look for this content and remove it. Now, to the extent that you or your staff have found this specifically, and to the extent it's still on our platform, we'd be happy to follow up later, because it shouldn't be. You know, my sons are grown now, but I feel like a parent pleading with their child again: please don't make me have to do this. Thank you, madam chair. I yield back. The gentleman yields back, and now I recognize Congresswoman Kelly for five minutes. Thank you, madam chair, and thank you for holding this hearing on Section 230. The intended purpose of Section 230 was to allow companies to moderate content under the Good Samaritan provision, and yet this law seems to be widely misapplied. The Good Samaritan provision in Section 230 was intended to protect actions taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable. Last Congress, the law was amended to make platforms liable for knowingly facilitating activity related to sex trafficking. Since passage, some have criticized that law for being too ambiguous. In addition to my work on this committee, I co-chair the Tech Accountability Caucus, and in that capacity I work to protect users in an accountable manner while allowing innovators to innovate. Today, as we look to foster a healthier and more consumer-friendly internet, it is my hope that our discussion will set the standard for doing so in a responsible, effective, and balanced way. Professor Citron, you discussed giving platforms immunity from liability if they could show their content moderation practices are reasonable. As the chairman referenced, how should companies know where the line is, or whether they're doing enough? Where is that line? The sort of genius of reasonableness is that it depends on the context. There are certainly some baseline presumptions, I would say defaults, about what would constitute reasonable content moderation practices, and that includes having them at all. There are some sites that don't engage in moderation whatsoever; in fact, they absolutely don't engage in it, and they encourage abuse and illegality. There is a baseline set of speech rules and policies that are best practices. Naturally, that's going to change depending on the challenge, so we're going to have different approaches to new and evolving challenges. That's why a reasonableness approach provides the shield, but does so in exchange for those efforts. And would you agree that any changes we make must ensure we don't create further ambiguity? Right. And if I may, what was disappointing to someone who helped some offices work on the language is when you include language like "knowingly facilitate." That's the moderator's dilemma: either sit on your hands or be overly aggressive. My biggest disappointment is how it came out. We almost see ourselves back at Prodigy and CompuServe: we are seeing way overly aggressive responses online, and we are seeing others doing nothing. I hope we don't do that. Information can start on one platform, jump to another, and go viral very quickly. The 2016 election showcased how effective this can be at targeting populations; offensive content is first shared in groups and then filtered out. Ms. Peters, do you believe it is incumbent on tech companies to proactively remove content that is rapidly spreading before it is flagged by users? I believe that companies need to moderate and remove content when it concerns clearly illegal activity.
If it's illegal in real life, it ought to be illegal online. When it comes to human trafficking, serious organized crime, and designated terror groups, they should not be given space to operate on our platforms. I also think that CDA 230 needs to be revised to provide more opportunities for civil suits, and to give state and local law enforcement the legal tools to respond to illicit activity. That's one of the reasons FOSTA was passed. What steps are you taking, beyond machine learning, to address the same content being shared 10,000 or 100,000 times? Thank you for the question. On YouTube we are using machines and algorithms: once a piece of content is identified and removed, our technology prevents it from being re-uploaded. But to your point about working across platforms and in collaboration, I would point to the Global Internet Forum to Counter Terrorism, of which we are one of the founding members; the leading players in tech are part of it. One of the things we saw during the Christchurch shooting is how quickly this content can spread, and we were grateful to see the protocols in place last week. There was a shooting in Germany, and a piece of content appeared on Twitch. The companies were able to engage the crisis protocol: a hash was made of the content, it was spread across the companies, and that enabled all of us to block it. I know I'm out of time. Thank you. The gentlelady yields back.
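The crisis protocol described above rests on hash sharing: one platform fingerprints the offending video, and only the hash is distributed, so every participating company can block re-uploads without the content itself changing hands. Below is a minimal sketch of the idea, assuming a simple shared set and using an exact hash as a stand-in for the perceptual hashes real deployments use.

```python
# Toy model of a cross-industry hash-sharing list. Member platforms
# contribute hashes of confirmed violating content; each member then
# checks new uploads against the shared set. Only hashes move between
# companies, never the content itself.
import hashlib

class SharedHashList:
    def __init__(self) -> None:
        self.hashes: set[str] = set()

    def contribute(self, content: bytes) -> None:
        self.hashes.add(hashlib.sha256(content).hexdigest())

    def is_blocked(self, content: bytes) -> bool:
        return hashlib.sha256(content).hexdigest() in self.hashes

shared = SharedHashList()
video = b"...offending stream bytes..."
shared.contribute(video)          # the platform that caught it first
assert shared.is_blocked(video)   # every other member now blocks re-uploads
```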
Thank you. My first question is for Dr. McSherry. Yes or no: I understand that EFF has argued for language mirroring Section 230 in trade deals, explicitly for baking language into an agreement to protect the statute domestically. Do you see the intent of including such 230-like language in agreements as ensuring that we may not revisit the statute? No. Okay. All right, thank you very much. What I'd like to do is ask that the EFF blog post from January 23rd, 2018, by Jeremy Malcolm be entered into the record. Without objection, so ordered. Thank you, madam chair, I appreciate it. My next question is for Mr. Huffman and Ms. Oyama. In April 2018 I questioned Mark Zuckerberg about how soon illegal opioid ads would be removed from his website. His answer was that the ads would be reviewed when they were flagged by users as being illegal or inappropriate. This, of course, is the standard answer in the social media space. However, Mr. Zuckerberg also said at the time that industry needs to, and I quote, "build tools that proactively go out and identify ads for opioids before people even have to flag them for us to review," end quote. This, in my opinion, would significantly cut down the time an illegal ad stays on a website. Again, Mr. Huffman and Ms. Oyama: it has been a year and a half, and people are dying; I'm sure you agree with this. Has the industry been actively working on intelligent flagging standards that can automatically identify illegal ads? And what is the status of this technology, and when can we expect implementation, if the industry has been working on it? Whoever would like to go first is fine. Reddit is a little different from our peers in that all of our ads go through a strict human review process, making sure not only that they are on the right side of our content policy, which prohibits the buying and selling of controlled substances, but also of our stricter ad policy, which has a much higher bar to cross, because we do not want ads that cause any sort of controversy on our platform. Okay, but I mean, you know, we have to be proactive as far as this is concerned; Mr. Zuckerberg indicated that that's the case. This is the thing: people are dying. We can't just stand by and have this happen, and have people gaining access to these, well, in most cases opioids and drugs, different types of drugs. But Ms. Oyama, would you like to comment, please? We certainly agree with your comment about the need for proactive efforts. In Google Ads we have something called a risk engine that helps us identify whether an ad is bad when it comes into the system. Last year, in 2018, we kicked 2.3 billion ads out of our system for violating our policies. Any ad for a prescription pharmacy must also be independently verified by an independent group. Of course, in the specific case of opioids, there is a lot of important work we've done with the DEA and the FDA, and even with pharmacies like CVS offline, to help promote things like drug take-back days, where people can bring opioids in and drop them off so they're not misused later on. One of the things we've seen is that the vast majority, more than 99 percent, of opioid misuse begins in the offline world, from a doctor's prescription or a family member or friend. So using technology to educate and inform people who might be victimized is equally important to some of the work we're doing in the ad space. Okay. How about anyone else on the panel? Would they like to comment? Is the industry doing enough? I don't think the industry is doing enough. There is an enormous amount of drug sales taking place on Google Groups, Instagram, and Facebook groups; the groups on these platforms are the epicenter. This is why industry has to be monitoring this: if you leave it up to users to flag it, and they're inside a private or secret group, it's just not going to happen. These firms know what users are getting up to; they're monitoring all the time; they can figure this out. Can I also add that there are two issues here: there are the ads, but also the organic content. You heard Ms. Peters say she searched on Reddit this morning, and that content is there even if it's not in the ads. The same is true of Google Search. There are two places you have to worry about these things, not just the ads. Very good, thank you. I yield back. The gentleman yields back.
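The "risk engine" mentioned in this exchange is, in essence, a classifier gate: every incoming ad is scored before it is served, and ads above a risk threshold never reach a page. Here is a toy sketch of that gate; every signal, weight, and threshold below is invented for illustration, and a production system would use trained models over far more signals.

```python
# Toy sketch of a pre-serving ad "risk engine": score each incoming ad,
# reject anything over a threshold before it is ever served. All the
# phrases, weights, and the threshold are hypothetical.
RISKY_PHRASES = {"no prescription needed", "opioids overnight"}
THRESHOLD = 0.5

def risk_score(ad_text: str, advertiser_verified: bool) -> float:
    score = sum(0.5 for p in RISKY_PHRASES if p in ad_text.lower())
    if not advertiser_verified:   # e.g., a pharmacy lacking independent
        score += 0.4              # verification is treated as higher risk
    return score

def admit_ad(ad_text: str, advertiser_verified: bool) -> bool:
    return risk_score(ad_text, advertiser_verified) < THRESHOLD
```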
Now I call on the chairman for five minutes. Thank you. I wanted to start with Ms. Oyama. In your written statement, you discussed YouTube's community guidelines for hate speech. I'm concerned about news reports that hate speech and abuse are on the rise on social media platforms. What is your approach to removing hate speech, if you will? Thank you so much for the question. On the category of hate speech, we have a clear policy about it: speech that incites violence, or speech that is hateful against groups with specific attributes. That could be speech based on race, religion, sex, age, disability status, or veteran status, and it is prohibited. It can be detected either by our machines, which is the case more than 87 percent of the time, by community flaggers, or by individual users. Last quarter we saw a 5x increase in the amount of content our machines were able to find and remove. Those removals are vitally dependent on the protection of CDA 230, which gives service providers the ability to fight bad content and take it down. We do face claims against us when we remove hate speech; users may assert other legal claims, and 230 is what enables not only Google or YouTube but any site with user-generated content to moderate. I think we would just encourage Congress to think about not harming the good actors, the innocent actors that are taking these steps, while going after the truly bad actors. They should be penalized, and law enforcement will play an important role in bringing them down, as it did with Backpage, which was taken down, or in civil cases like Roommates.com, where there is liability for bad actors that break the law. Thank you. Dr. Farid, in your written testimony you write about how the internet has contributed to domestic and international terrorism, and about both criminal and civil liability associated with providing material support for terrorism. But I want to start with Dr. McSherry. 230 doesn't apply to federal criminal law. How do social media companies use 230 to shield themselves from civil liability when their platforms are used as propaganda and recruitment platforms? So, there are ongoing cases, and there have been several cases, where platforms have been accused of violating civil laws for hosting certain kinds of content, and they've invoked Section 230 in those cases quite successfully. And I think, if you look at the facts of a lot of these cases, that's actually quite appropriate. The reality is that it's very difficult for a platform to always draw the line in advance between content that is simply protected political communication and content that steps over a line. These cases are hard and complicated, and they have to get resolved on their facts. Section 230 also creates a space, because of the additional protections it provides, for service providers to moderate when they choose to and to enforce their own policies. Let me go back to Dr. Farid. Do you have any thoughts on how this should be addressed from a technological perspective? I want to start by saying that when you hear about the moderation happening today, and we've heard it from Google and Reddit, it has only come after intense pressure. It has come from pressure from advertisers, pressure from Capitol Hill, pressure from the EU, pressure from the press. So there's bad news, bad PR, and then we start getting serious. For years we were struggling to get social media companies to do more about extremism and terrorism online, and we hit a hard wall. Then the EU started putting pressure, Capitol Hill started putting pressure, advertisers started putting pressure, and we started getting responses. I think this is exactly what this conversation is about. What is the underlying factor? Self-regulation, the "trust us, we'll do everything" approach, is not working. The pressure has to come from other avenues, and I think applying pressure through modest changes to CDA 230 is the right direction. And I agree with Ms. Oyama: if these are good actors, then they should welcome that change and help us clean up and deal with the problems we are facing. But, you know, I've been in this fight for over a decade now, and it's a very consistent pattern: you deny the problem exists, you minimize the extent of it, you get enough pressure, and then we start making changes. We should just skip to the end of that pattern, recognize we can do better, and start doing better. Thank you. Thank you, madam chair. The gentleman yields back. Now I recognize for five minutes Congressman Gianforte. Thank you. About 20 years ago I harnessed the power of the internet to launch a business to improve customer service.
That company was called RightNow Technologies, and from a spare bedroom in our home we eventually grew that business into one of the largest employers in Montana, with about 500 high-wage jobs there. The platform we created had about 8 million unique visitors per day, and I understand how important Section 230 protections are. However, the statute has gotten mixed up with complaints about viewpoint discrimination, and I want to cite one particular case. In March of this year, the Missoula-based Rocky Mountain Elk Foundation reached out to my office because Google had denied one of their advertisements. The foundation did what it had done many times: it tried to use paid advertising on the Google network to promote a short video about a father hunting with his daughter. This time, however, the foundation received an email from Google stating, and I quote, "any promotions about hunting practices, even when they are intended as a healthy method of population control or conservation, are considered animal cruelty and deemed inappropriate to be shown on our network." The day I heard about this, I sent a letter to Google, and you were very responsive, but the initial position taken was absurd. Hunting is a way of life in Montana and many parts of the country. I'm very thankful that you worked quickly to reverse that decision, but I remain concerned about Google's ability to stifle speech; Rocky Mountain Elk and other groups have faced similar denials, and we don't know how many hunting ads Google has blocked in five years. In my letter I invited Google's CEO to meet with leaders of our outdoor recreation businesses in Montana. I haven't heard anything back, and I would extend the invitation again. I think, frankly, it would help Google to get out of Silicon Valley, come to Montana, sit down with some of your customers, and hear from them directly about things that are important to them. I would be happy to host that visit. We would love to meet with you there. I think it's important to understand the work these groups do to further conservation and to help species thrive. As an avid hunter and outdoorsman myself, I know many businesses in Montana focus on hunting and fishing, and I worry that they may be denied the opportunity to advertise on one of the largest online platforms, which you built, to your credit. I also worry that an overburdensome regulatory regime could hurt small businesses and stifle Montana's rapidly growing high-tech sector. So the invitation is open. Doctor, one question for you: how can we walk the line between protecting small business and innovation on one hand and overburdensome regulation on the other? Right now we have near monopolies in the technology sector, and if we regulate now, small companies coming up can't compete. There are ways to create carve-outs. In the EU and the UK, they are creating carve-outs for small platforms, the ones with 8 million users versus 3 billion users. We want to tread lightly here; Ms. Peters made the point that we want to inspire competition, but there are mechanisms to do that. We just have to think carefully about it. We have had discussions today about getting criminal activity off the network, and I applaud that. But as a follow-on, doctor: how do we ensure that content moderation doesn't become censorship and a violation of our First Amendment rights? So, the way we have been thinking about content moderation is as a collaboration between humans and computers. What computers are very good at is doing the same thing over and over again. What they are not good at is nuance, inference, and context.
The way content moderation works, human moderators say, this is a child, this is sexually explicit; we fingerprint that content, and we remove that piece of content. PhotoDNA, which we developed about a decade ago, has a false positive rate of about one in 50 billion. So if you're going to deploy automated technology at internet scale, you have to operate at that kind of accuracy. Computers can't do that on their own; we need more human moderators. You heard from Google: 500 hours of video are uploaded every minute. Those moderators would have to look at hours upon hours of video per hour. Thank you. I look forward to seeing you in Montana. I yield back. The gentleman yields back, and now Congresswoman Rochester is recognized for five minutes. Thank you, madam chairwoman, and to the chairman and ranking members, thank you for holding this important hearing. I think many of us here today are seeking to more fully understand how Section 230 of the Communications Decency Act can work well in an ever-changing virtual and technological world. This hearing is really significant, and as was said, I do not want us to forget the important things the internet has provided us, from moments to applications to TikTok. But also, as Mr. Huffman said of Reddit, and it applies to all of us: we must constantly be evolving, and our policies must evolve to face new challenges and balance them with our civil liberties. My question concerns bad content moderation. I want to start by saying that the utilization of machine learning algorithms and artificial intelligence provides an important technological approach to moderating an increasing amount of content. However, as we become more and more reliant on algorithms, we are increasingly finding blind spots and gaps that may be difficult to bridge with simply more and better code. I think there is a real concern that groups already facing prejudice and discrimination will be further marginalized and censored. As I thought about this, I thought about groups like veterans, or the African American community in the 2016 elections. Doctor, can you describe some of the challenges with moderation by algorithm, including possible bias? Yes. I think you're right: when we automate at the scale of the internet, we are going to have problems. We've already seen this. We know, for example, that face recognition does much worse on women and people of color than it does on white men. The problem with automation is that if your algorithm is 99 percent accurate, which is very, very good, you're still making one mistake in 100. At the scale of the internet, that's tens of millions of mistakes. So the underlying idea that we can fully automate this, so as not to take on the responsibility and expense of hiring human moderators, doesn't work. I fear that we keep being told to give it time to find the AI algorithms, because companies don't want to hire human moderators because of the expense. We know today that this will not work in the next year, two years, five years, ten years. And it's worse than that, because it assumes an adversary that's not adapting. We know the adversary will adapt. We know, for example, that machine learning algorithms meant to identify content are vulnerable to adversarial attacks: you can add a small amount of perturbation and fool the systems.
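Dr. Farid's arithmetic is worth working out. A "99 percent accurate" classifier errs on 1 item in 100, while PhotoDNA's quoted false positive rate is roughly 1 in 50 billion; at platform volumes the difference is enormous. A back-of-the-envelope calculation, assuming a purely illustrative volume of 500 million items screened per day:

```python
# Expected daily errors at platform scale. The 500M items/day volume
# is an assumption for illustration, not a figure from the testimony.
DAILY_ITEMS = 500_000_000

def expected_errors(error_rate: float, items: int = DAILY_ITEMS) -> float:
    return error_rate * items

ml_model = expected_errors(1 / 100)              # "99% accurate" classifier
photodna = expected_errors(1 / 50_000_000_000)   # quoted PhotoDNA rate

print(f"99%-accurate model: ~{ml_model:,.0f} errors/day")  # ~5,000,000
print(f"PhotoDNA-level:     ~{photodna:.2f} errors/day")   # ~0.01
```

Millions of mistakes a day is the gap between "very good" by machine learning standards and "deployable without human review," which is the point about keeping human moderators in the loop.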
I want to ask a quick question. You talked about the number of human moderators you have available to you, and I know we've had many hearings on the challenges of diversity in the tech field. I'm assuming, Mr. Huffman, yours are more from the user perspective in terms of moderators. Or are they people that you hire? And the 10,000 or so that you mentioned, are these people that you hire, or are they users? Just a quick answer, so everybody knows: users, or a combination? For us, it's about 100 employees out of 500; of course, millions of users participate as well. That's what I thought. The 10,000 I mentioned is a mixture of full-time employees, specialized vendors we work with, and community flaggers, which can be NGOs or law enforcement. I know I don't have a lot of time, but could you provide us with information on the diversity of your moderators? That's one of my questions. And then, I don't like to make assumptions, but I'll assume it might be a challenge to find diverse populations of individuals to do this role, so if we can have a follow-up on that as well. Then my last question is for the panel: what should the federal government, what should we, be doing to help in this space? Because I'm concerned about the capacity to do this and do it well. If anybody has any suggestions or recommendations; Dr. Farid is already pushing his button. I think this conversation is helping. I think you'll scare the technology sector, and that's a good thing to do. I have to yield back; I'm out of time. Thank you so much for your work. The gentlewoman yields back. Now, last but not least, Representative Soto, you are recognized for five minutes. Thank you, madam chairwoman. First of all, thank you all for being here. I'm the last one, so we're in the homestretch. It's amazing that we're here today when we think about how far the internet has progressed: one of the greatest inventions in human existence, connecting the world, giving billions a voice whose stories would never have been told before, providing knowledge at our fingertips. It's incredible. We know Section 230 has been a big part of it, providing safe harbor, essentially the dam holding back a flood of lawsuits. It has created innovation, but it has also created a breeding ground for defamation and harassment, for impersonation, for election interference, and a breeding ground for white supremacists, disinformation, global terrorism, and other extremism. We have these wonderful gifts on one side, and then all the terrible things of humanity on the other. My concern is that lies spread faster than the speed of light on the internet, while truth seems to move at a snail's pace; that's one thing I constantly hear from my constituents. I want to start with basics, just so I know everybody's opinion. Who do you all think should be the cop on the beat, the primary enforcer, with the choices being the FCC, the FTC, or the courts? I would like to hear what each of you thinks, and if those are my only three options, you can give a fourth. I think, in the United States, society, and on our platform, our users. Okay. Who do you think should be the cop on the beat? I'll take the courts, because that forces, in some sense, the companies themselves to be the norm producers. Okay. Yes, so I think the courts have a very important role to play, but a cardinal principle for us at EFF is that at the end of the day, users should be able to control their internet experience, and we need to have many, many more tools to make that possible. I think that's a ridiculous argument. The vast majority of people... I study organized crime. Hold on, I'll answer the question: courts and law enforcement. Most people are good; a small percentage of people, statistically, in any community, commit crime.
You have to control for it. Thank you. I want to point out that the courts and the FTC do have jurisdiction. As you know, the FTC has broad jurisdiction over tech companies already, and the courts are always looking at the outer contours of 230. Thank you. I agree; we all have a responsibility. And if we were to tighten up the rules in the courts, it would be great to hear, first starting with you: if we limit it to injunctive relief, is that enough? With due respect, I'm not a policymaker or a lawyer, I'm a technologist; I'm not the one who should answer that question. Would injunctive relief be enough? I would echo the small business and startup voices that say the current framework created certainty, and that certainty is essential for their content moderation and economic viability. Mr. Huffman? A similar answer, sir. I would shudder to think what would happen, when we were smaller or even now, on the receiving end of armies of tort lawyers. Injunctive relief? As you say, injunctive relief: all I can think of is the First Amendment and prior restraint. We need to be careful about the kinds of remedies we consider. But law operates, if we allow law to operate: if people act unreasonably and recklessly, the array of possibilities should be available. Lastly, I want to talk a little bit about Section 230. I'm from Orlando, the land where a fictional mouse and a fictional wizard are two of our greatest assets. Ms. Peters, I know you talked a little about the issue of including 230 in trade deals. How would that be problematic for a region like ours, where intellectual property is so critical? It's problematic because it will tie Congress's hands in reforming the law down the line, and that's precisely why industry is pushing it. There are 90 pages of copyright provisions in existing trade agreements; 230 can be treated the same as other U.S. law. If we adjusted the laws here, would it affect the trade deals? There's no language in the trade deals that binds Congress's hands. Congress regularly has hearings on patents and climate; there's nothing in the trade agreement that would prevent that. It creates a U.S.-style framework at a time when countries like China and Russia are developing their own frameworks for the internet. There's nothing in the current USMCA that would limit your ability to later look at 230 and decide it needed tweaks. Thanks. I'll yield back. The gentleman yields back. That concludes our period for questioning, and now I seek unanimous consent to put into the record: a letter from CreativeFuture with attachments, a letter from the American Hotel and Lodging Association, a letter from the Consumer Technology Association, a letter from the Travel Technology Association, a white paper from Airbnb, a letter from Common Sense Media, a letter from the Computer and Communications Industry Association, a letter from Representative Ed Grace, a letter in support of the PLAN Act, a letter from the i2Coalition, a letter to the FCC from Representative Gianforte, a letter from TechFreedom, a letter from the Internet Association, a letter from the Wikimedia Foundation, a letter from the Motion Picture Association, an article titled "Searching for Help," and a statement from R Street. Without objection, so ordered. And let me thank our witnesses. I think this was a really useful hearing, and those of you who have suggestions, more concrete ones than some that came up today, our committee would appreciate them very, very much. I'm sure the joint committee would appreciate that as well, this being a joint hearing.
So I want to thank all of you so much for your thoughtful presentations and for the written testimony, which often went well beyond what we were able to hear today. I want to remind members that, pursuant to committee rules, they have ten business days to submit additional questions for the record to be answered by the witnesses, and I ask the witnesses to respond promptly to any questions you may receive. At this time the committees are adjourned. Thank you. [ hearing adjourned ]