
The committee will now come to order. The chair now recognizes himself for five minutes for an opening statement.

Online content moderation has largely enabled the internet experience that we know today, whether it's looking up restaurant reviews on Yelp, catching up on SNL on YouTube, or checking in on a friend or a loved one on social media. These are all experiences that we've come to know and rely on, and the platforms we go to to do these things have been enabled by user-generated content as well as the ability of these companies to moderate that content and create communities. Section 230 of the Communications Decency Act has enabled that ecosystem to evolve. By giving online companies the ability to moderate content without equating them to the publisher or speaker of that content, we've enabled the creation of massive online communities where millions and billions of people can come together and interact. Today this committee will be examining the world that Section 230 has enabled, both the good and the bad.

I'd like to thank the witnesses for appearing before us today. Each of you represents important perspectives related to content moderation and the online ecosystem. Many of you bring up complex concerns in your testimony, and I agree that this is a complex issue. I know that some of you have argued that Congress should amend 230 to address things such as online criminal activity, disinformation, and hate speech, and I agree: these are serious issues. Like too many other communities, my hometown of Pittsburgh has seen what unchecked hate can lead to. Almost a year ago our community suffered the deadliest attack on Jewish Americans in our nation's history. The shooter acted after posting a series of antisemitic remarks on a fringe site, before finally posting that he was going in. A similar attack occurred in New Zealand, where the gunman streamed his despicable acts on social media sites, and while some of those sites moved to quell the spread of that content, many didn't move fast enough, and the algorithms meant to help sports highlights and celebrity selfies go viral helped amplify a heinous act. In 2016 we saw similar issues when foreign adversaries used the power of these platforms against us to disseminate disinformation and foment doubt in order to sow division and instill distrust in our leaders and institutions.

Clearly, we all need to do better, and I would strongly encourage the witnesses before us that represent these online platforms, and other major platforms, to step up. The other witnesses on the panel bring up serious concerns with the kind of content available on your platforms and the impact that content is having on society, and as they point out, some of those impacts are very disturbing. You must do more to address these concerns. That being said, Section 230 doesn't just protect the largest platforms or the most fringe websites. It enables comment sections on individual blogs, lets people leave honest and open reviews, and allows free and open discussion about controversial topics. The kind of ecosystem that has been enabled by more open online discussion has enriched our lives and our democracy. The importance of individuals having their voices heard, particularly in marginalized communities, cannot be overstated. The ability of people to post content that speaks truth to power has created political movements in this country and others that have changed the world we live in.
We all need to recognize the incredible power this technology has for good, as well as the risks we face when it's misused. I want to thank you all again for being here. I look forward to our discussion today, and I'd now like to yield the balance of my time to my good friend Ms. Matsui.

Thank you, Mr. Chairman. I want to thank the witnesses for being here today. In April 2018 Mark Zuckerberg came to Congress and said "it was my mistake, and I'm sorry" when pushed about Facebook's role in allowing Russia to interfere in the 2016 presidential election. Fast forward 555 days, and I fear that Mr. Zuckerberg may not have learned from his mistake. Recent developments confirm what we have all feared: Facebook will continue to allow ads that push falsehoods and lies, once again making its online ecosystem fertile ground for election interference in 2020. The decision to remove blatantly false information should not be a difficult one. The choice between deepfakes, hate speech, and online bullies on the one hand and a fact-driven debate on the other should be easy. If Facebook doesn't want to play referee about the truth in political speech, then it should get out of the game. I hope this hearing produces a robust discussion, because we need it now more than ever. Mr. Chairman, I yield back. Thank you.

Thank you, the gentlelady yields back. The chair now recognizes the ranking member of the subcommittee for five minutes for his opening statement.

Thank you, Mr. Chairman, for holding today's hearing, and thank you very much to our witnesses for appearing before us. And again, welcome to today's hearing on content moderation and a review of Section 230 of the Communications Decency Act. This hearing is a continuation of a serious discussion we began last session as to how Congress should examine the law and ensure accountability and transparency for the hundreds of millions of Americans using the internet today. We have an excellent panel of witnesses that represents a balanced group of stakeholders, ranging from large to small companies, as well as academics and researchers. I'm not advocating that Congress repeal the law, nor am I advocating that Congress consider niche carve-outs that could lead to the slippery slope that some in the internet industry argue would follow if the law were repealed. Before we discuss whether or not Congress should make modest, nuanced modifications to the law, we should first understand how we got to this point. It is important to look at Section 230 in the context of when it was written. At the time, the decency portion of the Telecom Act of 1996 included other prohibitions on objectionable or lewd content. Provisions that were written to target obscene content were struck down by the Supreme Court, but the Section 230 provisions remained. Notably, CDA 230 was intended to encourage internet platforms, interactive computer services like America Online, to proactively take down offensive content. As Chris Cox stated on the House floor, we want to encourage people like Prodigy, like CompuServe, like America Online, like the new Microsoft Network, to do everything possible for us, the consumer, to help us control, at the portals of our computer, at the front door of our house, what comes in and what our children see.
It is unfortunate, however, that the courts fixed such a broad interpretation of Section 230, simply granting broad liability protection without platforms having to demonstrate that they are doing everything possible. Instead of encouraging its use, numerous platforms have hidden behind the shield and used procedural tools to avoid litigation without having to take responsibility. Not only are good Samaritans sometimes being selective in taking down harmful or illegal activity, but Section 230 has been interpreted so broadly that bad Samaritans can skate without accountability. That's not to say platforms never use these tools; many do great things, and many maintain billions of accounts annually. But oftentimes those instances are the exception, not the rule. We will dig deeper into those examples and learn how platforms decide to remove content, whether it's with the tools provided by Section 230 or with their own self-constructed terms of service. Under either authority, we should be encouraging enforcement to continue. Mr. Chairman, I thank you for holding this important hearing so that we can have an open discussion of Congress's intent for CDA 230 and whether we should reevaluate the law. We must ensure that platforms are held reasonably accountable for activity on their platforms without drastically affecting innovative startups. With that, Mr. Chairman, I yield back the balance of my time.

The gentleman yields back. I should have mentioned that this is a joint hearing between our subcommittee and the Subcommittee on Consumer Protection and Commerce, and I would like to recognize the chair of that subcommittee for five minutes.

Thank you, Mr. Chairman, and good morning, and thanks to all the panelists for being here. Today the internet certainly has improved our lives in many, many ways and has enabled Americans to more actively participate in society, education, and commerce. Section 230 of the Communications Decency Act has been at the heart of United States internet policy for over 20 years. Many say that this law allowed free speech to flourish, allowing the internet to grow into what it is today. In the early days of the internet, it was intended to encourage online platforms to moderate user-generated content, to remove offensive, dangerous, or illegal content. The internet has come a long way since the law was first enacted. The amount and sophistication of user postings has increased exponentially. Unfortunately, the number of Americans who report experiencing extreme online harassment, which includes sexual harassment, stalking, bullying, and threats of violence, has gone up over the last two years; 37% of users say that they have experienced it this year. Likewise, extremism, hate speech, election interference, and other problematic content are proliferating. The spread of such content is problematic, that's for sure, and it actually causes some real harm that multibillion-dollar companies like Facebook, Google, and Twitter can't or won't fix. And if this weren't enough cause for concern, more for-profit businesses are attempting to use Section 230 as a liability shield for activities that have nothing to do with third-party content or content moderation policy. In a recent Washington Post article, Uber executives seemed to be opening the door to claiming vast immunity from labor, criminal, and local traffic liability based on Section 230.
This would represent a major unraveling of 200 years of social contracts, community governance, and congressional intent. Also at issue is the Federal Trade Commission's Section 5 authority over unfair or deceptive practices. The FTC pursues Section 5 cases on website-generated content, but cases over terms-of-service violations involving third-party content may also be precluded by 230 immunity. I also wanted to talk a bit about injecting 230 into trade agreements. It seems to me that we've already seen that in the Japan trade agreement, and there is a real push to include it now in the U.S.-Mexico-Canada trade agreement. There is no place for that. I think that the laws in these other countries don't really accommodate what the United States has done with 230. The other thing is that we are having an important conversation about 230 right now, and in the midst of that conversation, because of all the new developments, I think it is just inappropriate at this moment to insert this liability protection into trade agreements. As a member of the working group that is helping to negotiate that agreement, I am pushing hard to make sure that it just isn't there. Whether or not I think we need any adjustment to 230, it just should not be in trade agreements. So all of the issues that we are talking about today indicate that there may be a larger problem: that 230 is no longer achieving the goal of encouraging platforms to protect their users. Today I hope that we can discuss holistic solutions. I am not talking about eliminating 230, but about taking a new look at it in light of the many changes that we are seeing in the world of big tech right now. I look forward to hearing from our witnesses about how it can be made even better for consumers, and I yield back. Thank you.

The gentlelady yields back. The chair now recognizes the ranking member of the full committee, Ms. McMorris Rodgers.

Good morning, and welcome to today's joint hearing on online content management. As the Republican leader on the Consumer Protection and Commerce Subcommittee, it's my priority to protect consumers while preserving the ability of small businesses and startups to innovate. In that spirit, today we are discussing online platforms and Section 230 of the Communications Decency Act. In the early days of the internet, two companies were sued for content posted on their websites by users. One company sought to moderate content on its platform; the other did not. In deciding these cases, the courts found that the company that did not make any content decisions was immune from liability, but the company that moderated content was not. It was after these decisions that Congress created Section 230. Section 230 is intended to protect, quote, interactive computer services from being sued over what users post, while also allowing them to moderate content that may be harmful, illicit, or illegal. This liability protection has played a critical and important role in the way we regulate the internet. It's allowed small businesses and innovators to thrive online without the fear of frivolous lawsuits from bad actors looking to make a quick buck. Section 230 is also largely misunderstood. Congress never intended to provide immunity only to websites that are, quote, neutral. Congress never wanted platforms to simply be neutral conduits but in fact wanted platforms to moderate content.
The liability protection also extended to allow platforms to make good-faith efforts to moderate material that is obscene, lewd, excessively violent, or harassing. There is supposed to be a balance to the use of Section 230: small internet companies enjoy a safe harbor to innovate and flourish online, while companies are also incentivized to keep the internet clear of offensive and violent content by being empowered to act and to clean up their own sites. The internet also revolutionized the freedom of speech by providing a platform for every American to have their voice heard and to access an almost infinite amount of information at their fingertips. Medium and other online blogs have provided a platform for anyone to write an op-ed. Wikipedia provides free, in-depth information on almost any topic you can imagine through mostly user-generated and user-moderated content. Companies that started in dorm rooms and garages are now global powerhouses. We take great pride in being the global leader in tech and innovation, but while some of our biggest companies certainly have grown, have they matured? Today it's often difficult to go online without seeing harmful, disgusting, or even illegal content. To be clear, I fully support free speech and believe society strongly benefits from open dialogue and free expression online. I know that there have been some calls for big government to mandate or dictate free speech or ensure fairness online, and they're coming from both sides of the aisle. Though I share some of the concerns that are driving these policy proposals, I do not believe these proposals are consistent with the First Amendment. Republicans successfully fought to repeal the FCC's fairness doctrine for broadcast regulation, and I strongly caution against advocating for a similar doctrine online. It should not be the FCC's, the FTC's, or any government agency's job to moderate free speech online. Instead, we should continue to provide oversight of big tech and its use of Section 230 and encourage constructive discussions on the responsible use of content moderation. This is a very important question that we're going to explore today with everyone on the panel: how do we ensure that companies with enough resources are responsibly earning their liability protection? We want companies to benefit not only from the shield but also to use the sword Congress afforded them to rid their sites of harmful content. I understand it's a delicate issue and certainly very nuanced, and I want to be very clear: I'm not for gutting Section 230. It's essential for consumers and entities in the internet ecosystem. Misguided and hasty attempts to amend or even repeal Section 230 for bias or other reasons could have unintended consequences for free speech and for the ability of small businesses to provide new and innovative services. But at the same time, it's clear we've reached a point where it's incumbent upon us as policymakers to have a serious and thoughtful discussion about achieving the right balance on Section 230. I thank you for the time, and I yield back.

The gentlelady yields back. The chair now recognizes Mr. Pallone.

The internet is one of the single greatest human innovations. It promotes free expression, connections, and community, and it also fosters economic opportunity, with trillions of dollars exchanged online every year.
One of the principal laws that paved the way for the internet to flourish is Section 230 of the Communications Decency Act, which of course passed as part of the Telecommunications Act of 1996. We enacted this section to give platforms the ability to moderate their sites to protect consumers without excessive risk of litigation, and to be clear, Section 230 has been an incredible success. But in the 20 years since Section 230 became law, the internet has become more complex and sophisticated. In 1996, the global internet reached only 36 million users, less than 1% of the world's population, and only one in four Americans reported going online every day. Compare that to now, when nearly all of us are online almost every hour that we're not sleeping. Earlier this year the internet passed 4.39 billion users worldwide, and here in the U.S. there are about 230 million smartphones that provide Americans instant access to online platforms. The internet has become a central part of our social, political, and economic fabric in a way that we couldn't have dreamed of when we passed the Telecommunications Act. With that complexity and growth, we've seen the darker side of the internet grow. Online radicalization has spread, leading to mass shootings in schools, churches, and movie theaters. Platforms have been used for the illegal sale of drugs, including those that sparked the opioid epidemic. Fraudsters have pushed false political stories that have sown distrust. Most despicable of all is the growing exploitation of children online: last year, 45 million photo and video reports of this material were made. While platforms are now better at detecting and removing this material, recent reporting shows that law enforcement officers are overwhelmed by the crisis. These are all issues that we can't ignore, and tech companies need to step up with new tools to help address these serious problems. Each of these issues demonstrates how online content moderation has not stayed true to the values underlying Section 230 and has not kept pace with the increasing importance of a global internet. And there's no easy solution to keep this content off the internet. As policymakers, I'm sure we all have our ideas about how we might tackle the symptoms of poor content moderation online while also protecting free speech, but we must seek to fully understand the breadth and depth of the internet today, how it's changed, and how it can be made better. It's with that in mind that I was disappointed that Ambassador Lighthizer, the U.S. Trade Representative, refused to testify today. The U.S. has included language similar to Section 230 in the United States-Mexico-Canada Agreement and the U.S.-Japan trade agreement. Ranking Member Walden and I wrote to him raising concerns about why this language is being included in trade deals even as we debate it here at home, and I was hoping to hear his perspective on why he believes that was appropriate. Including provisions in trade agreements that are controversial to Democrats and Republicans is not the way to get support from Congress. So hopefully the ambassador will be more responsive to bipartisan requests in the future. With that, Mr. Chairman, I will yield back.

The gentleman yields back. The chairman would like to remind members that all opening statements shall be made part of the record. Can mine be made part of the record? I apologize. The chair now yields to my good friend, the ranking member. Oh, how times have changed. For five minutes.

Thank you, Mr. Chairman. I want to welcome our witnesses today. Thank you for being here.
This is really important work, and I will tell you at the outset we've got another subcommittee meeting upstairs, so I'll be bouncing in between, but I've got all your testimony and look forward to your comments. It's without question a balanced roster of experts in this field. Last Congress we held significant hearings that jump-started the discussion of the state of online protections, as well as the legal basis underpinning the modern internet ecosystem, as you've heard today, and of course the future of content moderation as algorithms determine what we see online. That's an issue our constituents want to know more about. Today we'll undertake a deeper review of Section 230 of the 1996 Telecommunications Act. In August of this year, Chairman Pallone and I raised the issue of language mirroring Section 230 in trade agreements. We did that in a letter to the Trade Representative, Robert Lighthizer. We expressed concerns about that language being exported while serious policy discussions are ongoing, and said that in the future the Trade Representative should consult our committee in advance of negotiating on these very issues. Unfortunately, we have learned that derivative language of Section 230 appeared in an agreement with Japan and continues to be advanced in other discussions. We're very frustrated about that, and I hope the administration is paying attention and listening, because they haven't up to this point on this matter. USTR does not appear to be reflecting the scrutiny that the administration itself says it is applying to how Section 230 is being utilized in American society. That makes it even more alarming for USTR to be exporting such policies without the involvement of this committee. To be clear, this section of the '96 telecom act served as a foundation. So we're here to understand what it truly is and to see that the entire section is faithfully followed rather than cherry-picking just a portion. I want to go back to the trade piece. I thought the letter to the ambassador was going to send the right message. We're not trying to blow up USTR; I'm a big free trader. But we're getting blown off on this, and I'm tired of it. So let it be clear: then we found out it's in the Japan agreement, so clearly they're not listening to our committee or to us. We are serious about this matter. We have not heard from USTR, and this is a real problem, so take note. If we all refer to Section 230 as "the 26 words that created the internet," as has been popularized by some, we're already missing the mark, since by our word count, which you can use software to figure out, that excludes the good-Samaritan obligations in subsection (c)(2). So we should start talking about that section as the 83 words that can preserve the internet. All of its sections and provisions should be taken together, not apart. Many of our concerns could be readily addressed if companies just enforced their terms of service. I believe a quick history lesson is in order. Today's internet looks a lot different than it did in the days of CompuServe and Prodigy. While the internet is more dynamic and content-rich than ever before, there were problems in its infancy managing the vast amounts of speech occurring online. As our friend Chris Cox, an alum of this committee, pointed out during the debate on his amendment, no matter how big the army of bureaucrats, it's not going to protect my kids, because I don't think the federal government will get there in time. So Congress recognized that we need companies to step up to the plate and curb harmful content on their platforms.
Upon enactment, CDA 230 bestowed on providers and users the ability to go after illegal and harmful content without fear of being held liable in court. While the law was intended to empower, we have seen social media platforms slow to clean up their sites while being quick to claim immunity from legal responsibility for such content. Internet platforms have shirked responsibility for content on their platforms. The broad immunity put in place through case law has obscured the central bargain that was struck: internet platforms are protected from liability in exchange for making good-faith efforts to moderate harmful and illegal content. Let me repeat, for those that want to be included in the interactive computer services definition: enforce your own terms of service. I look forward to an informative session today on differentiating protected speech from illegal content, how we should think about it, and the ways the internet shapes what consumers see or don't see. Mr. Chairman, thank you for having this hearing. I look forward to getting the feedback from the witnesses.

So, the administration doesn't listen to you guys either, huh? My statement spoke for itself pretty clearly. We'll find out if they're listening or not. The gentleman yields back. All members' written opening statements will be made part of the record.

We now want to introduce our witnesses for today's hearing: Mr. Steve Huffman, co-founder and CEO of Reddit, welcome. Ms. Danielle Citron. Dr. Corynne McSherry. Ms. Gretchen Peters, welcome. Ms. Katherine Oyama, welcome. And Dr. Hany Farid. Welcome to all of you. We want to thank you for joining us today, and we look forward to your testimony. At this time the chair will recognize each witness for five minutes to provide their opening statement. Before we begin, I would like to explain our lighting system. In front of you is a series of lights. The light will be green at the start of your opening statement and will turn yellow when you have one minute remaining. Please wrap up your testimony at that point. When the light turns red, we just cut your microphone off. We don't, really, but try to finish before then.

Mr. Huffman, we're going to start with you. You're recognized for five minutes.

Thank you. Good morning. Chairpersons, ranking members, members of the committee, thank you for inviting me. My name is Steve Huffman. I'm the co-founder and CEO of Reddit, and I'm grateful to share why 230 is critical to our company and the internet. Reddit moderates content differently than other platforms: we empower communities, and this approach relies on 230. Changes to 230 pose an existential threat not just to us but to thousands of startups across the country, and they would destroy what competition remains in our industry. My college roommate and I started Reddit in 2005 as a forum to find news and interesting content. Since then it's grown into a vast community-driven site where millions of people find not just news and a few laughs but real perspectives. Reddit is communities, communities that are created and moderated by our users. Our model has taken years to develop, with hard lessons learned along the way. I left the company in 2009, and for a time Reddit lurched from crisis to crisis over the questions of moderation we're discussing today. In 2015 I came back because I realized the vast majority of communities were providing an invaluable experience to users, and Reddit needed a better approach to moderation. The way Reddit handles content moderation is unique.
Everyone follows a set of rules, has the ability to vote and self-organize, and ultimately shares some responsibility for how the platform works. First, we have our content policy, the fundamental rules that everyone on Reddit must follow. Think of these as federal laws. We employ a group of specialists, known as our Anti-Evil team, to enforce these policies. Below that, each community creates its own rules, state laws if you will. These rules, written by the volunteer moderators themselves, are tailored to the unique needs of their communities. The self-moderation our users do every day is what scales to the challenge of moderating content online. Individual users play a crucial role as well. They can vote up or down on any piece of content, posts or comments, and report it to our Anti-Evil team. Through this system of voting and reporting, users can accept or reject any piece of content, thus turning every user into a moderator. The system isn't perfect. It's possible to find things on Reddit that break the rules, but its effectiveness has improved with our efforts. Independent academic analysis has shown our approach to be effective in curbing bad behavior. When we investigated Russian attempts at manipulating our content, less than 1% made it past the routine defenses of our team, community moderation, and downvotes from users. We constantly evolve our policies, and since my return we've made a series of updates. These are just a few of the ways we've worked to moderate in good faith, which brings us to the question of what Reddit would look like without 230. For starters, we would be forced to defend against anyone with enough money to bankroll a lawsuit. It's worth noting that the cases most commonly dismissed under 230 are defamation claims. As an open platform, we would be a prime target for such suits, enabling censorship through litigation. Even targeted amendments to 230 would create a regulatory burden on the entire industry, benefiting the largest companies by placing a significant cost on smaller competitors. While we have 500 employees and a large user base, normally more than enough to be considered a large company, in tech today we are an underdog compared to our nearest competitor. Still, we recognize there's truly harmful material on the internet, and we're committed to fighting it. It's important to understand that rather than helping, even narrow changes to 230 can undermine the power of community and hurt the vulnerable. Take the opioid epidemic, which has been raised in discussions on 230. We have many communities on Reddit where people struggling with addiction can find support. With a carve-out in this area, hosting them may be too risky, forcing us to close them down. This would be a disservice to people who are struggling. This is exactly the type of decision that restrictions on 230 would force on us. Section 230 is a uniquely American law with a balanced approach that has allowed platforms like ours to flourish. While the downsides are serious and demand the attention of those of us in the industry and of you in Congress, they do not outweigh the overwhelming good that 230 has enabled. Thank you. I look forward to your questions.
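To make the layered model Mr. Huffman describes concrete, here is a minimal sketch of site-wide policy ("federal law"), per-community rules ("state law"), and user voting and reporting working together. This is an illustration only; every name, label, and threshold below is hypothetical, not Reddit's actual code.

```python
from dataclasses import dataclass, field

# Layer 1: site-wide content policy, enforced by a central team.
SITE_POLICY = {"spam", "violent threats", "involuntary pornography"}

@dataclass
class Community:
    name: str
    # Layer 2: rules written by volunteer moderators for this community.
    local_rules: set[str] = field(default_factory=set)

@dataclass
class Post:
    body: str
    labels: set[str] = field(default_factory=set)  # tags applied by reviewers
    score: int = 0     # net of user upvotes and downvotes
    reports: int = 0   # user reports escalate to the central team

def moderate(post: Post, community: Community) -> str:
    if post.labels & SITE_POLICY:          # site-wide policy violation
        return "removed: site policy"
    if post.labels & community.local_rules:  # community-rule violation
        return "removed: community rule"
    # Layer 3: every user acts as a moderator through votes and reports.
    if post.reports >= 3:
        return "escalated for review"
    if post.score < -5:
        return "hidden by downvotes"
    return "visible"

if __name__ == "__main__":
    c = Community("r/example", local_rules={"off-topic"})
    print(moderate(Post("hello", labels={"off-topic"}), c))   # removed: community rule
    print(moderate(Post("fine post", score=12), c))           # visible
```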
Thank you, Mr. Huffman. Ms. Citron, you're recognized for five minutes.

Thank you for having me, and for having such a thoughtful bench with me on this panel. When Congress adopted Section 230 more than 20 years ago, the goal was to incentivize tech companies to moderate content. Although Congress wanted the internet, as it then imagined it, to be open and free, it also knew that openness would risk offensive material, and I'm going to use their words. So what they did was devise an incentive: a legal shield for good Samaritans trying to clean up the internet, one that accounted both for the failure to remove content, underfiltering, and for overfiltering. Now, the purpose of this statute was fairly clear, but its words weren't, so what we've seen are courts massively overextending Section 230 to sites that are irresponsible in the extreme and that produce extraordinary harm. We've seen the liability shield applied to sites whose entire business model is abuse: revenge-porn operators, and sites that do nothing but curate users' videos. Interestingly, it is not only bad Samaritans who have enjoyed the legal shield from responsibility, but also sites that have nothing to do with speech and instead traffic in dangerous goods, like armslist.com. And the costs are significant. This overbroad interpretation allows bad-Samaritan sites, reckless, irresponsible sites, to impose costs on people's lives. I'm going to take the case of online harassment, because I've been studying it for the past ten years. The costs are significant, especially to women and minorities. Online harassment that is often hosted on these sites is costly to people's central life opportunities. When a Google search of your name contains rape threats, your nude photo posted without your consent, and your home address because you've been doxed, it's hard to get a job and it's hard to keep a job. And victims are driven offline in the face of online assaults. They're terrorized. They often change their names, and they move. So in many respects the calculus, the free-speech calculus, is not necessarily a win for free speech, as we're seeing diverse viewpoints and diverse individuals being chased offline. Now, the market, I think, is ultimately not going to solve this problem. So many of these businesses make money off of online advertising, and salacious, negative, and novel content attracts eyeballs. So I don't think we can rely on the market itself to solve this problem. So, of course: legal reform. The question is how we should do it. I think we have to keep Section 230; it has tremendous upsides. But we should return it to its original purpose, which was to condition the shield on being a good Samaritan, on engaging in what we have called reasonable content moderation practices. There are other ways to do it, and in my testimony I draw out solutions. But we've got to do something, because doing nothing has costs. It says to victims of online abuse that their speech and their equality are less important than the business profits of some of the most harmful platforms. Thank you.

The chair now recognizes Dr. McSherry for five minutes.

Thank you. As legal director for the Electronic Frontier Foundation, I want to thank the chairs, ranking members, and members of the committee for the opportunity to share our thoughts with you today on this very, very important topic. For nearly 30 years, EFF has represented the interests of technology users, both in court cases and in broader policy debates, to help ensure that law and technology support our civil liberties. Like everyone in this room, we are well aware that online speech is not always pretty. Sometimes it's extremely ugly, and it causes serious harm.
We all want an internet where we are free to meet, create, organize, share, debate, and learn. We want to have control over our online experience and to feel empowered by the tools we use. We want our elections to be free from manipulation, and we want women and marginalized communities to be able to speak openly about their experiences. Chipping away at the legal foundations of the internet in order to pressure platforms to better police the internet is not the way to accomplish those goals. Section 230 made it possible for all kinds of voices to get their message out to the whole world without having to acquire a broadcast license, own a newspaper, or learn how to code. The law has thereby helped remove much of the gatekeeping that once stifled social change and perpetuated power imbalances. That's because it doesn't just protect tech giants; it protects regular people. If you've ever forwarded an email, a news article, a picture, or a piece of political criticism, you've done so with the protection of Section 230. If you've maintained an online forum for a neighborhood, you've done so with the protection of Section 230. If you've used Wikipedia to figure out where George Washington was born, you've benefited from Section 230. And if you are viewing online videos documenting events in real time in northern Syria, you're benefiting from Section 230. Intermediaries, whether social media platforms, news sites, or email forwarders, are not protected by Section 230 just for their own benefit. They're protected so they can be available to all of us. There's another very practical reason to resist the impulse to amend the law to pressure platforms to more actively monitor and moderate user content. Simply put, they're bad at it. As EFF and many others have shown, they regularly take down all kinds of valuable content, in part because it's difficult to draw clear lines between lawful and unlawful speech, particularly at scale, and those mistakes often silence the voices of already marginalized people. Moreover, increased liability risks will inevitably lead to over-censorship. It's a lot easier and cheaper to take something down than to pay lawyers to fight over it, particularly if you're a smaller business or a nonprofit. And automation is not the magical solution. Context matters very often when you're talking about speech, and robots are pretty bad at nuance. For example, in December 2018, the blogging platform Tumblr announced a new ban on adult content. Shortly thereafter, Tumblr's own filtering technology flagged its own example images of permitted content as unacceptable. Here's the last reason: new legal burdens are likely to stifle competition. Facebook and Google can afford to throw millions at moderation, automation, and litigation. Their smaller competitors, or would-be competitors, don't have that kind of budget. So in essence we would have opened the door for a few companies and slammed that door shut for everyone else. The free and open internet has never been fully free or open, and the internet can amplify the worst of us as well as the best. But at root, the internet still represents and embodies an extraordinary idea: that anyone with a computing device can connect with the world to tell their story, organize, educate, and learn. Section 230 helps make that idea a reality, and it's worth protecting. Thank you, and I look forward to your questions.

Thank you, Dr. McSherry. Ms. Peters, you're recognized for five minutes.

Thank you.
Distinguished members of the subcommittees, it's an honor to be here today to discuss one of the premier security threats of our time, and one that Congress is well positioned to solve. I'm the executive director of the Alliance to Counter Crime Online. Our team is made up of academics, security experts, NGOs, and citizen investigators who have come together to eradicate organized crime and terror activity on the internet. I want to thank you for your interest in our research and for asking me to join the panel of witnesses today. Like you, I had hoped to hear the testimony of the U.S. Trade Representative, because keeping CDA 230 language out of trade agreements is critical to national security. I have a long history of tracking organized crime and terrorism. I was a war reporter, and I wrote a book about the Taliban and the drug trade. That got me recruited by U.S. military leaders to support our intelligence community, and I mapped transnational crime networks and terror networks for Special Operations Command, the DEA, and CENTCOM. That's when my team discovered that the largest retail markets for endangered species are located on social media platforms like Facebook. Founding the Alliance to Counter Crime Online, which looks at crime more broadly than just wildlife, has taught me the incredible range and scale of illicit activity happening online. It is far worse than I ever imagined, and we can and must get it under control. Under the original intent of CDA 230, there was supposed to be a shared responsibility between tech platforms, law enforcement, and organizations like ACCO, but tech firms are failing to uphold their end of the bargain. They enjoy undeserved safe harbor for hosting illicit activity. Committee members, the tech industry may try to convince you today that most illegal activity is confined to the dark web, but that's not the case. Surface-web platforms provide much the same anonymity. We're tracking illicit groups ranging from Mexican drug cartels to Chinese triads that have weaponized U.S. social media platforms. We are now in the midst of a public health crisis, the opioid epidemic, which is claiming the lives of more than 60,000 Americans a year, yet Facebook, the world's largest social media company, only began tracking drug postings on its platform last year. Within six months the firm identified 1.5 million posts selling drugs. That's 100 times more postings than the notorious dark-web site the Silk Road ever carried. Study after study by ACCO members and others has shown widespread use of Google, Twitter, Facebook, Reddit, and YouTube to market and sell fentanyl, oxycodone, and other highly addictive, deadly substances to U.S. consumers in direct violation of U.S. federal law. Every major internet platform has a drug problem. Why? Because there is no law that holds tech firms responsible, even when a child dies buying drugs on an internet platform. Tech firms play an active role in spreading harm. The algorithms, well intentioned to connect friends, also help criminals and terror groups connect to a global audience. ISIS and other terror groups use social media to recruit, fundraise, and spread propaganda. The ACCO alliance, among others, includes an incredible team of Syrian archaeologists who track the sale of looted antiquities on these platforms; this trafficking is a war crime. We're also tracking groups on Instagram, Google, and Facebook where endangered species are sold, items ranging from elephant ivory to live chimpanzees and cheetahs. In some cases the size of the online markets is literally threatening species with extinction.
I could continue to sit here and horrify you all morning: illegal dogfighting, live videos of children being sexually abused, weapons, explosives, human remains, counterfeit goods. It's all just a few clicks away. Distinguished committee members, the tech industry continually claims that modifying 230 is a threat to freedom of speech, but CDA 230 is a law about liability, not freedom of speech. Try to imagine another industry in this country that has ever enjoyed such an incredible subsidy from Congress: total immunity, no matter what harm their products bring to users. It was cheaper and easier to scale while looking the other way. They were given this incredible freedom, and they have no one to blame but themselves for squandering it. We want to see reforms to the law that strip immunity for hosting terror and serious-crime content, regulation requiring firms to report crime and terror activity to law enforcement, and appropriations for law enforcement to contend with this data. Committee members, if it's illegal in real life, it should be illegal to host online. It is imperative that we reform 230 to make the internet safer for all.

The gentlelady yields back. Ms. Oyama, you're recognized for five minutes.

Chairmen, ranking members, distinguished members of the committee, thank you for the opportunity to appear before you today. I appreciate your leadership on these important issues and welcome the opportunity to discuss Google's work in these areas. My name is Katie Oyama, and I'm the global head of IP policy at Google. In that capacity I advise the company on public policy frameworks for the management and moderation of online content of all kinds. At Google, our mission is to organize the world's information and make it universally accessible and useful. Our services, and many others, are positive forces for creativity, learning, and access to information. This creativity and innovation continues to yield enormous economic benefits for the United States. However, like all means of communication that came before it, the internet has been used for both the best and worst of purposes, and this is why, in addition to respecting local law, we have robust policies, procedures, and community guidelines that govern what activity is permissible on our platforms, and we update them regularly to meet the changing needs of our users and of society. My testimony today will focus on three areas: the history of 230 and how it helped the internet grow, how 230 has contributed to taking down harmful content, and Google's policies across our products. Section 230 has created a robust internet ecosystem where commerce, innovation, and free expression thrive, while also enabling providers to take aggressive steps to fight online abuse. Digital platforms help millions of consumers find legitimate content across the internet, facilitating almost $29 trillion in online commerce each year. Addressing illegal content is a shared responsibility, and our ability to take action is underpinned by 230. The law not only clarifies when services can be held liable for third-party content but also creates the legal certainty necessary for services to take swift action against harmful content of all types. The Good Samaritan provision was introduced to facilitate content moderation, and it does nothing to alter platform liability for violations of federal criminal laws, which are exempted from the scope of the CDA. The importance of Section 230 has only grown, and it is critical to ensuring continued economic growth.
A recent study found that over the next decade, 230 will help produce 4 to 5 million new jobs. Furthermore, investors in the startup ecosystem have said that weakening online safe harbors would have a recession-like impact on investment. Internationally, 230 is a differentiator for the U.S. China, Russia, and others take a different approach, censoring speech online, sometimes including speech that is critical of political leaders. Perhaps the best way to understand the importance of 230 is to imagine what might happen if it weren't in place. Without 230, search engines, video-sharing platforms, political blogs, startups, and review sites of all kinds would either not be able to moderate content at all or they would overblock, either way harming the consumers and businesses that rely on these services every day. Without 230, platforms could be sued for their decisions to remove content from their platforms, such as hate speech or pyramid schemes. With it, we can ensure platforms are safe, useful, and vibrant for users. For each product we have a specific set of rules and guidelines that are suitable for the type of platform, how it is used, and the risk of harm associated with it. These range from clear content policies, with flagging mechanisms to report content that violates them, to increasingly effective machine learning that can facilitate removal of harmful content at scale before a single human user has been able to access it. From April to June, YouTube removed over 9 million videos from the platform for violating our guidelines; 87% of this content was flagged by machines first rather than by humans, and of the videos detected by machines, 81% were never viewed by a single user. We now have over 10,000 people across Google working on content moderation, and we've invested hundreds of millions of dollars in these efforts. In my written testimony I go into detail about the policies and procedures for tackling content on Search, Google Ads, and YouTube. We are committed to being responsible actors who are part of the solution. Google will continue to invest in the people and the technology needed to meet this challenge, and we look forward to continued collaboration with the committee as it examines these issues. Thank you for your time, and I look forward to taking your questions.

Dr. Farid, you have five minutes.

Chairmen, chairwoman, ranking members, members of both subcommittees, thank you for the opportunity to speak with you today. Technology and the internet have had a remarkable impact on our lives and society. Many educational, entertaining, and inspiring things have emerged from the past two decades of innovation, but at the same time many horrific things have emerged: the massive proliferation of child sexual abuse material; the recruitment of terrorists; the distribution of deadly drugs; the proliferation of disinformation campaigns designed to sow civil unrest and incite violence; the harassment of women and underrepresented groups with threats of violence; small- and large-scale fraud; and spectacular failures to protect personal and sensitive data. How, in 20 short years, did we go from the promise of the internet to democratize access to knowledge and make the world more understanding and enlightened, to this litany of daily horrors? A combination of willful ignorance and growth at all costs has meant that proper safeguards were never installed on these services. The problem we face today is not new. As early as 2003 it was well known that the internet was a boon for child predators.
The technology sector dragged its feet through the early and mid-2000s, neither responding to the known problems at the time nor putting in place the proper safeguards. In defense of the technology sector, they are contending with an unprecedented amount of data: some 500 hours of video uploaded to YouTube every minute, some 1 billion daily uploads to Facebook, and some 500 million tweets per day. On the other hand, these same companies have had over a decade to get their houses in order and have simply failed to do so. At the same time, they have managed to profit handsomely by harnessing the scale and volume of this content each day. They routinely and quite effectively remove copyright infringement, and they remove legal adult pornography because otherwise their services would be littered with it. During his testimony, Mr. Zuckerberg repeatedly invoked artificial intelligence as the savior for content moderation, and we are told it is five to ten years away. Putting aside that it's not clear what we should do in the intervening decade or so, this claim is almost certainly overly optimistic. Earlier this year, Facebook's chief technology officer showcased a system for distinguishing images of broccoli from images of marijuana. Despite the latest advances in AI and pattern recognition, this system is only able to perform the task with an accuracy of 91%. This means that one in ten times, the system is simply wrong. At the scale of a billion uploads a day, the technology cannot possibly moderate content. And this discrimination task is surely much easier than identifying the broad class of extremism and disinformation material. The promise of AI is just that, a promise. We cannot wait a decade or more with the hope that AI will improve by the nine orders of magnitude needed before it might be able to contend with automated online content moderation.
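A back-of-the-envelope check of the scale argument, using only the figures cited in the testimony (91% accuracy, roughly a billion uploads per day):

$$
\underbrace{10^{9}}_{\text{uploads/day}} \times \underbrace{(1 - 0.91)}_{\text{error rate}} = 9 \times 10^{7} \ \text{erroneous decisions per day.}
$$

Even a classifier that erred only once in a thousand decisions would still mishandle about a million uploads a day at that volume ($10^{9} \times 10^{-3} = 10^{6}$), which is the force of the point about needing improvement by many orders of magnitude.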
To complicate things even more, earlier this year Mr. Zuckerberg announced that Facebook will be implementing end-to-end encryption across its services, preventing anyone, the government or Facebook itself, from seeing the content of any communications. Blindly implementing end-to-end encryption will make it even more difficult to contend with the litany of abuses I enumerated at the opening of my remarks. We can and must do better when it comes to contending with some of the most violent, harmful, and dangerous content online. I simply reject the naysayers who argue that it is too difficult from a policy or technological perspective, or those who say reasonable content moderation will lead to a stifling of ideas. Thank you, and I look forward to taking your questions.

Thank you, Dr. Farid. Well, we've concluded our opening statements, and we're going to move to member questions. Each member will have five minutes to ask questions of our witnesses, and I will start by recognizing myself for five minutes.

I have to say, as I said at the beginning of my remarks, this is a complex issue, a very complex issue, and I think we've all heard the problems. What we need to hear is solutions. Let me just start by asking all of you, just by a show of hands: who thinks that online platforms can do a better job of moderating the content on their websites? So that's unanimous. I agree. I think it's important to note that we all recognize that content moderation online is lacking in a number of ways and that we all need to address this issue better. And if it's not you, the platforms and the experts in this technology, who fix it, and you put that on our shoulders, you may see a law that you don't like very much and that has a lot of unintended consequences for the internet. I would say to all of you: you need to do a better job. The industry needs to get together and discuss better ways to do this. The idea that you can buy drugs online and we can't stop it, to most Americans hearing that, they don't understand why that's possible, why it wouldn't be easy to identify people that are trying to sell illegal things online and take those sites down. Child abuse. It's very troubling. On the other hand, I don't think anybody on this panel is talking about eliminating Section 230. So the question is, what is the solution that falls between eliminating 230, given the effects that would have on the whole internet, and making sure that we do a better job of policing this?

Mr. Huffman, a lot of people know of Reddit, but it's really a relatively small company when you place it against some of the giants. You host many communities, and you rely on your volunteers to moderate discussions, but I know that you've shut down a number of controversial subreddits that have spread deepfakes, disturbing content, misinformation, and dangerous conspiracy theories. What would Reddit look like if you were legally liable for your company's decisions to moderate content in your communities?

Sure. Thank you for the question. Reddit would be forced to go to one of two extremes. In one version, we would stop looking. We would go back to the pre-230 era, which means if we don't know about it, we're not liable. That, I'm sure, is not what you intend, and it's certainly not what we want; it would not be aligned with our mission of bringing community and belonging to everybody in the world. The other extreme would be to remove or prohibit any content that could be remotely problematic. Since Reddit is a platform where 100% of our content is created by our users, that fundamentally undermines the way Reddit works. It's hard for me to give you an honest answer of what Reddit would look like, because I'm not sure Reddit as we know it would exist in a world where we had to remove all user-generated content.

Dr. McSherry, you talk about the effect on speech if Section 230 were substantially repealed or altered, but what other tools could Congress use to incentivize platforms and encourage a healthier online ecosystem? What would your recommendation be, short of eliminating 230?

Well, I think a number of the problems that we've talked about today so far, which everyone agrees are very, very serious, I want to underscore that, are actually often addressed by existing laws that target the conduct itself. So, for example, in the Armslist case, we had a situation where the sale of the gun that was so controversial was actually perfectly legal under Wisconsin law. Similarly, many of the problems that we've talked about today are already addressed by federal criminal laws that already exist, and there Section 230 is not a barrier, because of course there's a carve-out for federal criminal law. So I would urge this committee to look carefully at the laws that actually target the behavior we are concerned about, and perhaps start there.

Ms. Peters, you did a good job horrifying us with your testimony. What solution do you offer, short of repealing 230?

I don't propose repealing 230. I think that we want to continue to encourage innovation in this country; it's a core driver of our economy. But I do believe that CDA 230 should be revised so that if something is illegal in real life, it is illegal to host it online.
I don't think that is an unfair burden for tech firms; remember, certainly some of the wealthiest firms can take that on. We all have to run checks to make sure, when we do business with foreign entities, that we're not doing business with anyone on a terror blacklist. Is it so difficult for companies like Google and Reddit to make sure that they're not hosting an illegal pharmacy?

I see my time has expired, but I thank you. I think we get the gist of your answer. The chair now yields to the ranking member for five minutes.

Well, thank you, Mr. Chairman, and again, thanks to all the witnesses. Ms. Oyama, if I can start with you. A recent New York Times article outlined the horrendous nature of child sex abuse online and how it has grown exponentially over the last decade. My understanding is that tech companies are legally required to report images of child abuse only when they discover them; they are not required to actually look for them. I understand you have made voluntary efforts to look for this kind of content. How can we encourage platforms to better enforce their terms of service, or to proactively use the sword provided by subsection (c)(2) of Section 230 to make good-faith efforts, to create accountability within platforms?

Thank you for the question, and particularly for focusing on the importance of subsection (c)(2) in incentivizing platforms to moderate content. I can say that at Google we think transparency is critically important, so we publish our guidelines, and on YouTube we publish a quarterly report where we show, across the different categories of content, the volume of content that we've been removing. We also allow users to appeal: if their content is stricken and they think that is a mistake, they have the ability to appeal it. So we do understand that this piece, transparency, is really critical for user trust and for discussions with policymakers on these critically important topics.

Thank you. Ms. Citron, a number of defendants have claimed Section 230 immunity in court. Was Section 230 intended to capture platforms that are solely responsible for their content?

So the question is about platforms where there is no user-generated content, where it's the platform creating the content, and whether that would be covered by the legal shield of 230? No, they would be responsible for the content that they've created and developed, so that legal shield, Section 230, would not apply.

Thank you. Dr. Farid, are there tools available, like PhotoDNA or Content ID, to flag the sale of illegal drugs online? If the idea is that platforms should be incentivized to take down blatantly illegal content, shouldn't keywords or other indicators associated with opioids be searchable through an automated process?

The short answer is yes. There are two ways of doing content moderation. Once material has been identified, typically by a human moderator, whether that's child abuse material, illegal drugs, terrorism-related material, copyright infringement, whatever it is, that material can be digitally fingerprinted and stopped from future upload and distribution. That technology has been well understood and deployed for over a decade, though I think it has been deployed anemically across the platforms and not nearly aggressively enough. That is one way content is moderated on networks today. The second form of moderation I call day zero: finding, say, the Christchurch video at the moment of upload. That is incredibly difficult, and that content often requires journalists or law enforcement to find. Once such content is identified, though, it can be removed from future upload.
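A minimal sketch of the first kind of moderation Dr. Farid describes: matching new uploads against fingerprints of previously identified material. Production systems such as PhotoDNA use perceptual hashes that survive re-encoding and cropping; the cryptographic hash below is a simplified stand-in, and all function names are illustrative, not any platform's actual API.

```python
import hashlib

# Fingerprints of material already confirmed as prohibited by human review.
# Real deployments use perceptual hashes so slightly altered copies still
# match; SHA-256 only catches byte-identical files and keeps this sketch simple.
known_bad_fingerprints: set[str] = set()

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def register_bad_content(data: bytes) -> None:
    """Called once a human moderator confirms the content is prohibited."""
    known_bad_fingerprints.add(fingerprint(data))

def screen_upload(data: bytes) -> bool:
    """Return True if the upload should be blocked before publication."""
    return fingerprint(data) in known_bad_fingerprints

# After the first confirmed sighting ("day zero"), every future upload of
# the same file is stopped at the door.
register_bad_content(b"...bytes of a confirmed prohibited video...")
assert screen_upload(b"...bytes of a confirmed prohibited video...")
assert not screen_upload(b"unrelated home movie")
```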
I will point out that today you can go onto Google and type "buy fentanyl online," and it will show you, on the first page, where you can purchase fentanyl. That's not a difficult find. We're not talking about the dark web or things buried on page 20; it's on the first page, and in my opinion there is no excuse for that. Let me follow up. You say what the platforms are doing out there is anemic. Last year in this room we passed over 60 pieces of legislation dealing with the drug crisis that we have in this country, fentanyl being one of the targets. You just mentioned you can simply type in fentanyl and find it. Okay. Again, what we're trying to do is make sure we don't have the 72,000 deaths we had in this country a year ago, with over 43,000 of them associated with fentanyl. So how do we go to the platforms and say we have got to enforce this, because we don't want this stuff flowing in from China? How do we do that? Well, this is what the conversation is about. I'm with everybody else on the panel: we don't repeal 230, but we make it a responsibility, not a right. If your platform can be weaponized in the way we have seen across the board, through the litany of things I listed in my opening remarks, surely something is not working. I can find this on Google on page one, and not just me: my colleagues at the table, investigative journalists. We know the content is there. It's not hiding. It's not difficult. We have to ask the question: if a reasonable person can find this content, surely Google, with its resources, can find it as well. Now, what is the responsibility? I think, as you said earlier too, you should enforce your terms of service. If we don't want to talk about 230, let's talk about terms of service. The terms of service of most of the major platforms are actually pretty good; it's just that the platforms don't do very much to enforce them in a clear, consistent, and transparent way. Thank you very much. Mr. Chairman, my time has expired. I yield back. The chair now recognizes Ms. Schakowsky, the chair of the subcommittee on consumer protection, for five minutes. Thank you, Mr. Chairman. Ms. Oyama, you talked in your testimony about what would happen without 230. I want to see if any hands would go up: has anybody on this panel said that we should abandon 230? Okay. So that is not the issue. This is a sensible conversation about how to make it better. Mr. Huffman, we had, I think, a really productive meeting yesterday, in which you explained to me what your organization does and how it's unique, and you also said in your testimony that Section 230 is a unique American law. Yet when we talked yesterday, you thought it was a good idea to put it in a trade agreement dealing with Mexico and Canada. If it's a unique American law, let me just say that I think trying to fit it into the regulatory structure of other countries at this time is inappropriate. And I would like to quote from a letter that both Chairman Pallone and Ranking Member Walden wrote some time ago to Mr. Lighthizer, which said, "We find it inappropriate for the United States to export language mirroring Section 230 while such serious policy discussions are ongoing." And that's what's happening right now: we're having a serious policy discussion.
But I think what the chairman was trying to do, and what I want to try to do, is figure out what we really want to amend or change. So again, briefly, for the three of you who have talked about the need for changes, let me start with Ms. Citron: what do you want to see in 230? I'd like to bring the statute back to its original purpose, which was to apply to Good Samaritans engaged in responsible and reasonable content moderation practices. And we can change the language of the statute to condition the legal shield: we're not going to treat a user or provider of an interactive computer service that engages in reasonable content moderation practices as the publisher or speaker. Let me just suggest that if there is language, I would like to see suggestions. Ms. Peters, if you could: I think you pretty much scared us as to what is happening; how can we make 230 responsive to those concerns? Thank you for your question, Chair Schakowsky. We would love to share some proposed language with you about how to reform 230 to better protect against organized crime and terror activity on platforms. One of the things I'm concerned about is that when a lot of tech firms detect illicit activity, or it gets flagged to them by users, their response is to delete it and forget about it. My concern there is twofold. Number one, that essentially destroys critical evidence of a crime; it's actually helping criminals to cover their tracks. Compare that with the situation we have for the financial industry, and even aspects of the transport industry: if they know illicit activity is going on, they have to share it with law enforcement within a certain time frame. I certainly want to see the content removed, but I don't want to see it simply deleted, and I think that's an important distinction. I would like to see a world where the big tech firms work collaboratively with civil society and with law enforcement to root out some of these evils. I'm going to cut you off just because my time is running out, and I do want to put the same question to Dr. Farid. So I would welcome concrete suggestions. Thank you. I agree with my colleague Professor Citron. I think 230 should be a privilege, not a right: you have to show that you are doing reasonable content moderation. I do think we should be worried about the small startups; if we start regulating now, the ecosystem could become even more monopolistic. The last thing I would say is that the rules have to be clear, consistent, and transparent. Thank you. I yield back. The chair now recognizes Ms. McMorris Rodgers for five minutes. Thank you, Mr. Chairman. Section 230 was intended to provide online platforms with a shield from liability as well as a sword to make good-faith efforts to filter, block, or otherwise address certain offensive content online. Professor Citron, do you believe companies are using the sword enough, and why do you think that is? We have been working with Facebook for years, so I would say the dominant platforms, and the folks on this panel, are at this point engaging in what I would describe, at a broad level, as fairly reasonable content moderation practices. I think they could do far better on transparency: when they forbid hate speech, what do they mean by that? What's the harm they want to avoid, with examples? And they could be more transparent about the processes they use when they make decisions, so there is more accountability.
But what really worries me are the renegade sites as well: the 8chans that foment incitement with no moderation, the dating apps that claim no ability to ban IP addresses. And frankly, sometimes it's the biggest providers, not the small ones, who know they have illegality happening on their platforms and do nothing about it. Why are they doing that? Because of Section 230 immunity. The dating app Grindr comes to mind, hosting impersonations that sent men to one man's home. Grindr heard from the targeted individual 50 times and did nothing about it. They finally responded after getting sued, and their response was that their technology doesn't allow them to track IP addresses; yet Grindr is fairly dominant in this space. When the person went to Scruff, a smaller dating site, where the impersonator was again posing as him and sending men to his home, Scruff responded right away, said it could ban the IP address, and took care of it. So I think the dividing line, by my lights, is not smaller versus larger; there are good, responsible practices and irresponsible, harmful practices. Okay. Thank you for that. Mr. Huffman and Ms. Oyama, your company policies specifically prohibit illegal content or activities on your platforms. Regarding your terms of service, how do you monitor content on your platforms to ensure that it does not violate your policies? Maybe I'll start with Mr. Huffman. Sure. In my opening statement, I described the three layers of moderation that we have on Reddit. First is our company's moderation team. This is the group that both writes the policies and enforces them. Primarily they work by enforcing these policies at scale, looking for aberrational behavior and for known problematic sites or words. We participate in cross-industry hash sharing, which allows us to find images that are exploitative of children, or fingerprints thereof, that are shared industry-wide. Next are our community moderators. These are users who remove what is inappropriate for their community and in violation of our policies. We have policies against hosting certain content; one of the points is no illegal content, so no regulated goods, no drugs, no guns, anything of that sort. So you are seeking it out, and if you find it, you get it off the platform? That's right. 230 does not provide us protection from criminal liability, and we are not in the business of committing crimes or helping people commit crimes; that would be problematic for our business. So we do our best to make sure it's not on there. Ms. Oyama, could you address that, and tell us what you do when you find illegal content? Across YouTube we have very clear policies. We publish them online, and we have YouTube videos that give more examples in a more specific way so people understand. Of the 9 million videos that we removed from YouTube in the last quarter, 87% were detected first by machines. So automation is one very important way, and the second way is human review: we have community flagging, where any user who sees problematic content can flag it and follow what happens with that complaint, and we also have human reviewers who look at it, and we are very transparent about explaining that.
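The cross-industry hash sharing and machine detection described above follow one pattern: compute a fingerprint of each upload and check it against a shared database of fingerprints of known-bad material. Below is a minimal sketch of that pattern under stated assumptions; the set name is invented, and a plain SHA-256 digest stands in for the perceptual hashes (such as PhotoDNA) that real systems use so that near-duplicates still match.

```python
import hashlib

# Hypothetical industry-shared set of fingerprints of known abusive images;
# in practice this would be a vetted database populated through a
# cross-industry clearinghouse, not an in-memory set of placeholders.
KNOWN_BAD_FINGERPRINTS: set[str] = {"<placeholder-fingerprint>"}

def fingerprint(image_bytes: bytes) -> str:
    """Digest of the file contents. Real systems use perceptual hashes
    (PhotoDNA, PDQ) so re-encoded or resized copies still match; a
    cryptographic hash only catches byte-identical files."""
    return hashlib.sha256(image_bytes).hexdigest()

def screen_upload(image_bytes: bytes) -> bool:
    """Return True if the upload should be blocked before publication."""
    return fingerprint(image_bytes) in KNOWN_BAD_FINGERPRINTS
```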
When it comes to criminal activity on the internet, of course, 230 has a clear carve-out. On the case of Grindr: we have policies against harassment, but in the Grindr case, where there was real criminal activity, my understanding is that there is a defendant, and a criminal case for harassment and stalking is proceeding against him. And in certain cases, opioids again, there is a provision of federal criminal law addressing the sale of controlled substances on the internet. In cases like that, where there is actually a law enforcement role, and where there is correct legal process, we would work with law enforcement to provide information under due process or a subpoena. Thank you. Okay. My time has expired. I yield back. Thank you. Ms. Degette, you are recognized for five minutes. I want to thank this panel. I'm a former constitutional lawyer, so I'm always interested in the intersection between criminality and free speech. Professor Citron, I was reading your written testimony, which you confirmed with Ms. Schakowsky, about how Section 230 should be revised to both continue to provide First Amendment protections and return the statute to its original purpose, which is to let companies act more responsibly, not less. In that vein, I want to talk during my line of questioning about online harassment. Sexual harassment online is a real issue that has only increased. The Anti-Defamation League reported that 24% of women and 63% of LGBTQ individuals have experienced online harassment because of their gender or sexual orientation. This is compared to 14% of men and 37% of all Americans of any background experiencing online harassment, which includes stalking and physical threats. I want to ask you, Professor Citron and Ms. Peters, very quickly, to talk to me about how Section 230 facilitates illegal activities. Do you think it undermines the value of those laws, and if so, how? In cases involving harassment, there is a perpetrator, and the platform enables the abuse. Most of the time, the perpetrators are not pursued by law enforcement. Abuse in cyberspace exploits the fact that law enforcement often will not understand it or doesn't know how to investigate it; in the Grindr case, there were ten protective orders that were violated, and New York has done nothing about it. It's not true that we can always find the perpetrator, especially in cases of stalking, harassment, and threats. We see severe under-enforcement of the law, particularly when it comes to gendered harms. That's really where it falls to the sites to try to protect people. Ms. Peters, do you want to add to that? There ought to be something akin to a cyber restraining order: if someone is stalking somebody on Grindr or OkCupid or Google, that site could be required to block the one from communicating with the other. Even under Section 230, have the platforms ignored requests of this type? They have. Professor Citron, you're nodding your head. They do, and they can, especially if those protective orders are coming from state criminal law. Okay. I wanted to ask you, Dr. McSherry: this becomes a problem on Twitter and other social platforms. I know Section 230 is a critical tool that facilitates content moderation. But as we've heard in the testimony, a lot of the platforms are not being aggressive enough in applying their terms and conditions.
I want to ask you, what can we do to encourage platforms to be more aggressive in containing issues like harassment? I imagine this hearing will encourage many of them to do just that. We keep having hearings all the time. I understand. And absolutely, many of the platforms are pretty aggressive already in their moderation policies. I agree with what many have said here today, which is that it would be nice if they started by clearly enforcing their actual terms of service; we have a shared concern there, because enforcement is not very consistent, and that's very challenging for users. A concern that I have is with proposals that would institute a duty to investigate whenever you get a notice. That can actually backfire on marginalized communities. One thing that happens is that if you want to silence someone online, one thing you can do is flood the service provider with complaints about them, and they will be the ones who are silenced. Dr. Farid, what's your view on that? Pardon me? What's your view of what Dr. McSherry said? There are two issues at hand here. With moderation, you risk over-moderating and under-moderating. We are way under-moderating. We look at where we fall down, where we make mistakes and take down content that we shouldn't, and we weigh that against 45 million pieces of content just last year of child abuse material, terrorism, and drugs. The weights are imbalanced, and we have to rebalance. We are going to make mistakes, but we're making way more mistakes in allowing content than in removing it. Thank you very much, Mr. Chairman. I yield back. The chair now recognizes Mr. Johnson for five minutes. Thank you, Mr. Chairman, and to you and Chair Schakowsky for holding this very important hearing. I have been in information technology for most of my adult life, and social responsibility is an issue that I have talked about a lot. I think it is the absence of heavy-handed government regulation that has allowed the internet and the social media platforms to grow like they have, but I hate to sound cliche with that old line from the Jurassic Park movie: sometimes we are so focused on what we can do that we don't think about what we should do. I think that is where we find ourselves with some of this. We heard from some of our witnesses that the accessibility of a global audience on internet platforms means they are being used for illegal and illicit purposes by terrorist organizations, and for the sale of opioids, which is affecting communities across the nation and in rural areas like where I live in southeastern Ohio. Internet platforms also provide an essential tool for legitimate communications and the free, safe, and open exchange of ideas, which has become a vital component of modern society and today's global economy. I appreciate hearing from all of our witnesses as our subcommittee examines whether Section 230 of the Communications Decency Act can effectively regulate under this light-touch framework. Mr. Huffman, in your testimony you discussed the ability of not only Reddit's employees but also its users to self-regulate and remove content that violates Reddit's community standards. Do you think other social media platforms like Facebook or YouTube have been able to implement similar self-regulating functions and guidelines? If not, what makes Reddit unique in its ability to self-regulate? Thank you, Congressman.
I'm only familiar with the other platforms to the extent that you probably are, which is to say I'm not an expert, but they are not sitting on their hands, and they're making progress. Reddit's model is unique in this industry in that we believe the only thing that scales with users is users. When we're talking about user-generated content, we share the burden with those people, the same way that society in the United States has already agreed on rules about what is and is not acceptable to say. The same thing exists on our platform: by empowering our users and communities to enforce those unwritten rules, it creates an overall healthier ecosystem. Ms. Oyama, your testimony discussed the challenge of determining which content is allowed on your platform, including balance and respect for marginalized voices. In a system like Reddit's, downvotes affect the visibility of viewpoints; do dislikes on YouTube similarly affect a video's visibility? Thank you for the question. As you've seen, users can give a thumbs up or thumbs down to a video. It's one of many signals, so it alone would not be determinative of whether the video is recommended as relevant. I really appreciate your point about responsible content moderation. I did want to make a point about how this operates in practice: we removed 35,000 videos from YouTube under our harassment and bullying policies, and we were able to do this because of 230. When content is removed, the poster might be upset, and absent the statute there could be cases against the service provider for defamation or breach of contract. Service providers large and small are able to have those policies and implement procedures to identify bad content and take it down because of the protection of 230. Okay, I have some other questions I want to submit for the record, but let me just summarize with this, because I want to stay within my time; you're going to require me to stay within my time. You know, the absence of regulation, as I mentioned in my opening remarks, takes social responsibility to a much higher bar. I would suggest to the entire internet industry, the social media platforms: you had better get serious about this self-regulating, or you're going to force Congress to do something that you might not want to have done. With that, I yield back. The chair recognizes Ms. Matsui for five minutes. I want to thank the witnesses for being here today. Ms. Oyama and Mr. Huffman, the Senate Intelligence Committee released a bipartisan report on social media. The report found that Russia used social media platforms to sow discord in the 2016 election. What role can Section 230 play in ensuring that platforms are not used that way again in a presidential election? Ms. Oyama and Mr. Huffman. 230 is important for services like ours in protecting citizens against interference in elections, and it's critical with the election cycle coming up. We found that across Google in the 2016 election, due to the measures we've been able to take on ad removal, there were only two accounts that infiltrated our systems. They were suspended, and they had spent less than 5,000 dollars in 2016. We continue to be extremely vigilant, and there is a transparency report requiring that election ads disclose who paid for them; they show up in a library. You feel that you were effective? We can always do more, but on this issue we are extremely focused. Mr. Huffman? In 2016, we saw the same fake news and misinformation on our platform that we saw on others.
The difference is that on Reddit it was largely rejected by the community and the users before it even came to our attention. That's one thing Reddit is good at: being skeptical, rejecting falsehoods, and questioning everything, for better or worse. Between then and now, we have become dramatically better at finding groups of accounts that are working in a coordinated or inauthentic manner, and at cooperating with law enforcement. From what we've seen in the past and what we can see going forward, we're in a pretty good position coming into the 2020 election. In your written testimony you said that misinformation campaigns are poised to disrupt the upcoming election. Election interference worries a lot of people, and there is more the platforms could be doing about moderating content online. What more should they be doing about this issue right now? Let me give you one example. A few months ago we saw the manipulated video of Speaker Pelosi make the rounds, and the response was interesting. Facebook said, we know it's fake, but we're leaving it up; we're not in the business of telling the truth. And that was not a technological problem. It was not satire, it was not comedy; it was meant to discredit the Speaker. I think fundamentally we have to look at the rules: if you look at Facebook's rules, you cannot post things that are misleading or fraudulent, so it was a clear case where the technology worked and the policy was ambiguous. To YouTube's credit, they took it down, and Twitter didn't even respond. In some cases there's a technological issue, but more often than not the platforms are simply not enforcing the rules that are in place. That's a decision that they made? Okay. Ms. Oyama, what do you think about what Mr. Farid just said? I'll respond. There are two aspects of this. First, specifically as to Reddit, we have a policy against impersonation, so a video that can be used to manipulate people on our service would fall under that policy; but it also raises questions about the veracity of the things we see, and it prompts important discussion. The decision about whether a video like that stays up or comes down on Reddit is a difficult one. I will observe that we are entering a new era in which we can manipulate videos. Historically we were able to manipulate text, and images with Photoshop; now it is video. Not only do the platforms have a responsibility, but we as a society have to understand that the source of the material, which publication it comes from, is critically important. There will come a time, no matter what my detector colleagues say, when we will not be able to detect that sort of thing. Exactly. Ms. Oyama, you have a few seconds. As you mentioned, we do have a policy against deceptive practices, but there is ongoing work that needs to be done to better identify deepfakes. Comedians sometimes use them, but in political contexts they could undermine democracy, so we are working with data sets and researchers so that our technology can better detect what has been manipulated and enforce those policies. I have a lot more to say, but you know how this is. Anyway, I yield back the rest of my time, thank you. The gentlewoman yields back, and we now recognize Mr. Kinzinger. I thank the chairman for yielding. One of the things at issue is our ability to have free speech and share opinions online, which can also become a real threat. I think it's safe to say that not every member of Congress has a plan for what to do about Section 230 of the Communications Decency Act, but we can agree that the hearing is warranted.
We need to have a discussion about whether the companies should enjoy these liability protections, and I'll state up front that I appreciate the efforts of certain platforms over the years to remove and block unlawful content. I'll also say it's clearly not enough, and I think the status quo is unacceptable. It's been frustrating for me in recent years that variations of my name have been used by criminals to defraud people on social media, and this goes back ten years; I think the cases could number in the fifties to hundreds, given what we have just been talking about. The scams are increasingly aggressive, and I brought this up not only in the hearing with Mark Zuckerberg last year but again in the summer, asking what Facebook will do to protect its users. Sources indicate that in 2018 consumers reported hundreds of millions of dollars lost to these scammers, including 143 million dollars through romance scams. Given what many people have gone through, it is important for platforms to verify user authenticity. Mr. Huffman and Ms. Oyama, what do your platforms do to verify the authenticity of user accounts? Thank you for the question. Again, two parts to my answer. The first is on the scams themselves. My understanding is that you're referring to scams that target veterans in particular. We have a number of veterans communities built around support and shared experiences, and like all of our communities, they create their own rules; these communities have created rules such as no fundraising in general, because the members know that they can be targeted by this scam in particular. That is the kind of nuance that is really important, and it highlights the power of our community model; as a non-veteran, I might not have that same sort of intuition. In terms of what we know about our users: Reddit is different from our peers in that we don't require users to share their real-world identity with us. We know where they register from, what IPs they use, and maybe an email address, but we don't force them to reveal their full name or their gender. This is important because on Reddit there are communities that discuss sensitive topics, such as drug addiction communities, or communities for parents who are struggling with being parents. Saying "I don't like my kids" is not something one would post on a platform like Facebook. I don't mean to cut you off, but I want to get to Ms. Oyama. I'm very sorry to hear that that happened to you, Congressman. We do have a policy against impersonation, so if you saw that happening to you as a user, there is a way you could submit a complaint with a government ID, and that would result in the impersonating content being struck from search. Spam can show up across web search, which indexes the web to surface relevant information for our users; every single day on search we suppress 19 billion links that are spam, which can include scams, to defend users. And on ads there is something called a risk engine that can kick fraudulent accounts out of the system. What I'm not upset about is things like "Kinzinger is the worst congressman ever"; that's understandable for some people. But in my case, as an example, in multiple cases, somebody flew from India using her entire life savings because she thought we had been dating for a year, not to mention all the money that she gave to this perpetrator, and there are all these other stories.
One of the biggest and most important things people need to be aware of is that if somebody has been engaging with you over a long period and never authenticating who they are, they are probably not real. Ms. Peters, what are the risks of not being able to trust other users online? There are multiple risks, but I want to come back to a key issue: if it's illicit, the sites should be required to hand over data to law enforcement and to work proactively with law enforcement. We've heard a lot today from the gentleman from Reddit about efforts to better moderate, yet some of our members were able to go online the other day and type in a search for "buy fentanyl online," and it came up with many results; same for "buy adderall online" or "buy adderall for cheap without prescription." I'm not talking about a super high bar for your platforms; it doesn't seem too hard to block that, or to automatically redirect to a site that would advise you to get counseling for drug abuse. We're not trying to be the thought police; we're trying to protect people from organized crime and this kind of activity. Okay, thank you. I will yield back, and I will submit my other questions. Thank you. The gentleman yields back. I want to say that I do not think he is the worst member of Congress. [laughs] I don't even think you're at the very bottom here, Adam; you're not that bad a guy. We now recognize Ms. Castor for five minutes. Thanks to all of our witnesses for being here today. I'd like to talk about the issue of 230 and the horrendous tragedy in Wisconsin a few years ago involving armslist.com, where a man walked into a salon where his wife was working, shot her dead in front of their daughter, killed two others, and then killed himself. This is the type of horrific tragedy that's all too common in America today. Dr. McSherry, I think you misspoke earlier, because you said that sale was all legal, but it wasn't: two days before the shooting, a temporary restraining order was issued against that man. He went online shopping on armslist.com two days after it was issued, and the next day he went on his murder spree. Armslist knows that it has domestic abusers, felons, and terrorists shopping for firearms, and yet it is allowed to proceed. Earlier this year, the Wisconsin Supreme Court ruled that Armslist is immune, even though it knows it is perpetuating illegal content and these kinds of tragedies. The Wisconsin Supreme Court said Armslist was immune because of Section 230; it basically said it did not matter whether Armslist actually knew or intended that its website facilitated illegal firearms sales, because Section 230 still granted immunity. And Ms. Peters, you've highlighted that this is not an isolated incident; whether we're talking about child sexual abuse content or illegal drug sales, it has gone way too far. I appreciate that you all have proposed some solutions. Dr. Citron, you highlighted a safe harbor: if companies use their best efforts to moderate content, then they would have some protection. But how would this work in reality? Would this be left up to the courts? Liability like this speaks to the need for very clear standards from Congress, I think. Yes, it would. Thank you so much for the question. How would we do this? It would be in the courts: at the initial motion to dismiss, the company being sued would have to show that it engages in reasonable content moderation practices under the law.
Not with regard to any one piece of content or activity; and it's true that it would be a forcing mechanism, through that motion in federal court, to have companies explain what constitutes reasonableness. I think we can come up with some basic threshold of what we think reasonable content moderation is, what I might describe as technological due process: accountability, having a process, and clarity about what it is you prohibit. And it's going to be case by case and context by context, because what's a reasonable response to deepfakes is going to be different from the kind of advice I would give to Facebook, Twitter, and others about what constitutes a threat and how to figure that out. And we should think about Dr. Farid's testimony: it wouldn't be in the public interest if explicit and clearly illegal content wound up as an issue of fact in a lawsuit. What do you think, Dr. Farid? Whether content is illegal online really shouldn't be a debatable question, should it? I'm a mathematician by training, so I don't know why you're asking me, but I completely agree with you. What we've seen over the years, and we saw this in developing PhotoDNA, is the technology companies getting muddled up in the gray area. We had a conversation about child abuse: what happens when it's an 18-year-old, or what happens when material is sexually explicit but ambiguous? Those are complicated questions. But there is really clear-cut bad behavior: people doing awful things to kids as young as two months old. I'll just highlight to the witnesses that there is also an issue with the number of moderators needed to go through this content. The Verge ran a horrendous story about Facebook moderators that caught my attention, because one of the sites is in Tampa, Florida. I'm going to submit follow-up questions about the moderators and some standards for that practice, and I encourage you to answer. Thank you, and I yield back. The gentlelady yields, and now the chair recognizes the gentleman from Illinois. I'm sorry I missed a lot of this because I was upstairs, but in my 20 years of being a member I've never had a chance to address the same question to two different panels on the same day. It was an interesting convergence: upstairs we were talking about vaping and underage use of that product. I was curious when, in the opening statements here (and I apologize, I missed some of them), someone mentioned two cases: one platform's case was dismissed because it really did nothing, and one platform that tried to be the good actor got slammed. I don't know about slammed, but I see a couple of heads nodding. Ms. Citron, can you address that first? You're nodding the most. Those are the two cases that led to the rise of Section 230, and that is what Chris Cox was responding to: a pair of decisions under which, if you do nothing, you're not going to be punished for it, but if you try to moderate, it heightens your responsibility. No good deed goes unpunished. That's why we're in agreement about that today in many respects. Let me tie this to what's going on upstairs. If someone uses a platform to encourage underage vaping with unknown nicotine content, and the platform decides to clean it up, then under the old regime this good deed, which most would agree is probably a good deed, would go punished. Now we have Section 230, and that's why we have Section 230: platforms are encouraged, as long as they're acting in good faith under Section 230(c)(2), to remove that content, and they're Good Samaritans.
That is the benefit of it. And is there a fear in this debate, as we heard in comments from some of our colleagues on the USMCA, that removing the 230 language would mean falling back to a regime in which the good-deed actor could be punished? Is that correct? Everyone is mostly nodding. Ms. Peters, you're not; go ahead. Just turn your mic on. We need to keep the 230 language out of the trade agreements. It is currently an issue of debate here in the United States, and it's not fair to put it into trade agreements, where it will make changing the law impossible, or at least harder. Don't get me wrong: I want to see this agreement passed as soon as possible without any encumbrance, but that hasn't happened. I'm not a proponent of trying to delay this process; I'm trying to work through this debate. And the concern upstairs, for those of us who believe in these products being legal or approved by the FDA, is about a black-market operation that will use these platforms to sell to underage kids. That is how I would tie these two hearings together. When we had the Facebook hearing a couple of years ago, I referred to a book called The Future Computed, which talks about the ability of industry to set standards. We do this across the board: in engineering and other fields, a sector comes together for the good of the whole and says, here are our standards. The fear is that if this sector doesn't do that, then heavy-handed government will do it for you, and that will cause a little bit more in the way of problems. Dr. Farid, you're shaking your head. I say we have to do better, because if we don't do it, someone else will do it for us, and that would be a nightmare. Part of that book talked about fairness, reliability, transparency, and accountability. I would encourage the industry and those who are listening to help us move in that direction on our own before we do it for them. With that, Mr. Chairman, I yield back my time. The gentleman yields, and the chair now recognizes himself for five minutes. This has been very interesting testimony, and jarring in some ways. Ms. Peters, your testimony is particularly jarring. Have you seen any authentic offers of weapons of mass destruction for sale online? I have not personally, but we certainly have members of our alliance tracking weapons activity. What is more concerning to me, anyway, is that a number of illegal groups, from designated groups to al-Qaeda, retain web pages, link to their Twitter and Facebook pages from them, and run fundraising campaigns. If they are interested in the weapons of mass destruction issue, it is inside those closed groups, which are the epicenter of illicit activity, and it is hard for us to get inside those. We've had an undercover operation to get inside some of them. Mr. Farid, you talk about a tension in tech companies between, on the one hand, the motivation of maximizing the amount of time users spend on the platforms, and on the other hand, content moderation. Can you talk about that briefly, please? We've talked about 230, but there is another tension point here, which is the underlying business model today: the product is not something sold to you; you are the product. In some ways, that's where a lot of the tension is coming from, because the metrics these companies use for success are how many users they have and how long those users stay on their platforms. You can see why that is fundamentally in tension with removing users and removing content.
The business model is an issue, and the way we deal with the privacy of user data is also an issue here. If the business model is monetizing user data, then I need to keep feeding you information: the rabbit-hole effect. There is a reason why, if you start watching certain types of videos, of children or conspiracies or extremism, you are fed more and more and more of that content down the rabbit hole. There is real tension there. It is the bottom line, and it's not just ideological; we're talking about profits. Would you like to add to that? Thank you. On many of these issues that we're discussing today, whether it's harassment or extremism, it's important to remember the positive potential of the internet. On YouTube, we've seen that counter-messaging works; in a program called Creators for Change, creators are able to make content for youth that counters these extreme messages. It's good to remember that Section 230 was born out of this committee and out of this policy, and it's relevant for foreign policy as well, as in the USMCA: the free flow of digital trade is responsible for a 172 billion dollar surplus that the United States has, and 230 is critically important for small businesses to be able to moderate content and to prevent censorship from more repressive regimes abroad. It is hard to restrain yourself to brief answers, and I understand that. Companies could be doing more today, within the current legal framework, to address problematic content. I'd like to ask each of you, very briefly, what you think could be done today with the best tools to moderate content. Very briefly, please. For us, the biggest challenge is evolving our policies to meet new challenges. As such, we've evolved our policies dozens of times, and we will continue to do so into the future. For example, recent changes include expanding our harassment policy and our policy on involuntary pornography. Deepfake pornography wasn't even a term a few years ago, and there will be new challenges in the future; being able to address them is really important. 230 gives us the space to adapt to these challenges. It enables us to respond to changing threats; it's not as though we can write a fixed checklist right now. And I would encourage companies not only to have policies but to be clear about them, as we are with ours.
We certainly know that law enforcement has been challenged for decades in dealing with pornography over the internet, and yet I believe that we have to continue to do more to protect children, to protect kids all around the globe. A tool, PhotoDNA, was developed a long time ago to detect criminal online child pornography, and yet it means nothing for detecting that illegal activity on platforms that don't do anything about it. We've been dealing with this for decades. This is not new, and we now have tools like PhotoDNA. So is it a matter of tools, or of effort? How is it that this is still happening? Dr. Farid? It's a source of incredible frustration. I helped make PhotoDNA back in 2008 with Microsoft. For an industry that prides itself on rapid and aggressive development, there have been no tools in the last decade that have gone beyond PhotoDNA, and that is pathetic. It is truly pathetic, when we're talking about this kind of material, for an industry that prides itself on innovation to say we're going to use ten-year-old technology to combat some of the most gut-wrenching and heartbreaking content online. It is totally inexcusable. It is not a technological limitation; we are simply not putting in the effort to develop these tools. Let me just share: we've watched some of these videos, and it is something that you never want to see and cannot get out of your mind. I'm curious, Ms. Oyama, how is it that we're still at this place? Thank you for that question. At Google, that is not true at all. We have never stopped working on and prioritizing this. We can always do better, and we constantly adopt new technologies. We had one of the first tools, which enabled us to create digital fingerprints of this imagery and prevent it from ever being uploaded to YouTube, and there is a newer API tool that we are sharing with others in the industry and with NGOs; some of that has resulted in increased detection of this type of content, and it is going to continue to be a priority. I want to be clear: from the very top of our company, we are committed to being a safe and secure place for parents and children, and we will not stop working on this issue. I'm very pleased to hear that there have been advances and that you're sharing them; that is critically important. I will say, an Indiana State Police captain who was testifying before the commerce committee recently told me that one of the issues in working with these companies is what he called being minimally compliant. He says that internet companies are not preserving content that can be used for investigations, and that when users make the companies aware of these materials, the companies automatically remove the flagged content without checking whether it's truly objectionable or not. Do any of you have thoughts specifically on his comment? He is an expert. Do any of you have thoughts on how to balance this critical law enforcement need, without restricting a company's immunity for hosting concerning content? They are saving children all around the globe. Ms. Peters? I feel that if companies got fines or some punitive damages every time there was illicit content, we would see a lot less illicit content very quickly. If it's illegal in real life, it should be illegal to host it online. That is a very simple approach that I think we can apply worldwide. I have a question I asked Mark Zuckerberg relative to terrorism and to recruitment by ISIS, and we still need to be concerned about ISIS.
I understand that you have teams of people taking this on. How many people on your team are dedicated to removing content and writing your policies? About 20% of our company, about 100 people. Ms. Oyama? More than 10,000 people working on content. That actually remove content? That are involved in content moderation and the development of the policies. How many people are on the team that actually does that work? I'm happy to get back to you. Thank you. And I yield back. The gentlelady yields. I would like to introduce a document for the record. The chair recognizes the gentlewoman from New York, Ms. Clarke, for five minutes. I thank our chairman and our chairwoman and ranking members for having this subcommittee hearing today on fostering a healthier internet to protect consumers. I introduced the first House bill on deepfake technology, called the DEEP FAKES Accountability Act, which would regulate fake videos. Deepfakes can be used to impersonate political candidates, create fake revenge porn, and threaten the very notion of what is real. Ms. Oyama and Mr. Huffman, what are the implications of Section 230 for your deepfake policies? I'll go. Thank you for the question. I think, along with most of our peers around the same time, we prohibited deepfake pornography on Reddit because we saw it as a new, emerging threat and wanted to get ahead of it as quickly as possible. The challenge with this is the one you raise, which is the increasing difficulty of being able to detect what is real and what is not. This is where we believe our model actually shines: our users and communities scrutinize every piece of content and highlight things that are suspicious, not just videos and images but news and other sources. I do believe very strongly that we as a society, and not just the platforms, have to develop defenses against this manipulation, because it is only going to increase. Thank you. On YouTube, our overall policy is a policy against deceptive practices, and there are instances where we've seen these deepfakes. The Speaker Pelosi video is one example, where we identified it and it was removed from the platform. For search and for YouTube, surfacing authoritative, accurate information is both our business and our long-term business objective. I would agree with what Mr. Huffman said. One of the things we're doing is investing deeply in the academic side, the research side, and the machine-learning side, opening up data sets of deepfakes so we can get better at being able to identify them. We also have a revenge porn policy for people victimized by that, and we have expanded it to include synthetic images. Ms. Citron, can you explain how Section 230 plays into the revenge porn problem? The activities that we've seen on YouTube are precisely the kinds of proactive responses to clear illegality that we want to see. But the real problem is the outlier sites; there was a recent report finding that eight out of the ten biggest porn sites host deepfake sex videos, and there are sites now whose business model is deepfake sex videos, 90% of which involve women who never consented. Section 230 provides them immunity, because the videos come from users. Does the current immunity structure reflect the unique nature of this threat? No. Section 230 at its best incentivizes the nimbleness that we are seeing in these dominant platforms, but the way the language is written under Section 230(c)(1), it confers immunity without conditioning it on being responsible and reasonable.
You have these outliers that cause enormous harm, because any search of your name can surface such a video; it is indexed and findable, and people will contact you. It is terrifying for victims. It's outlier companies whose business model is abuse, and Section 230 is what they point to; they say, sue me, too bad, so sad. That is the problem. Very well. One of the issues that has become an existential threat to civil society is hate speech and propaganda on social media platforms. Ms. Oyama, if protections were removed, would it change platforms' incentives around moderating such speech? Thank you for the question. I think this is a really important area that shows the power and importance of 230. As you know, there are First Amendment restrictions on government regulation of speech, so there is additional responsibility for service providers like us in the private sector to step up. We have a policy against hate speech: incitement to violence is prohibited, as is speech targeting specific groups with hate based on attributes such as race, religion, status, or age. The takedowns that we do every single quarter, through automated flagging and machine learning or human reviewers, are lawful and possible because of 230. When we take down content, the person whose content is taken down can come back at any service provider; they may sue for defamation or other things. I also think looking at the equities of the small-business interest here is really important, because I think small businesses would say they are even more deeply reliant on this flexibility and the space to innovate new ways to identify bad content and take it down without fear of unmitigated litigation or uncertainty. Thank you, I yield back, Madam Chairwoman. The gentlelady yields back, and now, Mr. Walberg, you are recognized. Thank you; I appreciate the chairwoman, and the witnesses, for being here. This hits home for a lot of us. As we have discussed here, the internet is such an amazing, amazing tool: it has brought about trade and innovation and connected millions of people in ways that we would never have thought of before, and truthfully, we look forward to what we will see in the future, along with the issues we have to wrestle with. Earlier this year I was happy to invite a young woman from my district to the State of the Union as my guest, to highlight the good work that she is doing in my district and surrounding areas to help combat cyberbullying. She is very comprehensive as a young person who understands how much of this is going on and having a real impact in high schools and colleges right now, as a result of her own experience; she has tried to make something positive out of it after she almost committed suicide; thankfully, she did not succeed. She shines a light on that. So, Mr. Huffman and Ms. Oyama, what are you doing to address cyberbullying on your platforms? Just two weeks ago we updated our policies around harassment on our platform. It's one of the more nuanced challenges that we have, because harassment appears in many forms.
One of the big changes we made is to allow harassment reports not just from the victim but from third parties: if someone else sees instances of harassment, they can report it to our team so we can investigate. This is particularly important on our platform because people come to us in times of need. For example, a teenager struggling with her own sexuality may have no place to turn, maybe not her friends or family, so she comes to a platform like ours; there are also situations where people who are having suicidal thoughts come to our platform. It is our first priority, regardless of the law (though we fully support lawmakers in this initiative), to make sure that those people have safe experiences on Reddit, so we have made a number of changes and will continue to do so in the future. Ms. Oyama? Thank you for the question. On YouTube, harassment and cyberbullying are prohibited, so we use our policies to help us enforce, either through automated detection or community flagging, and we are able to identify that content: last quarter we removed 35,000 videos under our policy against harassment and bullying. I just want to echo Mr. Huffman's perspective that the internet and content sharing are very valuable and can serve as a lifeline for a victim of harassment or bullying. We see that all the time: someone who is isolated in their school reaching out across borders, or to another state, to find another community has really created a lot of hope, and we also want to point people to mental health resources and content like that. I am glad to hear that people are continuing to help us as we move forward on this. Google's ad network has come a long way. On ads that could serve potentially illegal activity: given that Google is able to identify such activity, why would it not just take down the content in question? It is true that in our ads system we have a risk engine. We prohibit illegal content, there are many different policies, and billions of ads a year are stricken out of the ad network for violations. So you are taking them down? Absolutely, before they're able to hit any page. I think it is very squarely in line with our business interests: we want advertisers to feel that our network and platforms are safe, and our advertisers only want to be serving good ads against good content. One final question. I understand that Google offers a feature that automatically takes down infringing uploads, which Google charges a fee for. Can this technology be applied to other content, and why doesn't Google offer this tool for free? Thank you for that question. We do have Content ID, which is our copyright management system. It is automated; we have partners across the music industry, film, and publishing, and it is part of our partner program, so it is offered for free; it doesn't cost the partners anything. In fact, it's a revenue generator: last year we sent three billion dollars back to rightsholders based on Content ID claims on copyrighted material, where they chose to take the majority of the ad revenue associated with it. That system of being able to identify algorithmically matched content, and then set controls (perhaps monetized in the entertainment space, absolutely blocked in the case of violent extremism), is something that is part of much of YouTube. Thank you, I yield back.
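The Content ID flow described above (fingerprint an upload, match it against reference files, then apply the rightsholder's chosen policy of monetizing, blocking, or tracking) can be sketched roughly as follows. This is an assumed illustration of the general pattern, not Google's implementation; the Reference type, the exact-match logic, and the policy strings are invented for the example, and real matching is fuzzy, feature-based audio and video matching.

```python
from dataclasses import dataclass
from typing import Optional

# Rightsholder-chosen policies, per the description above: monetize the
# match (common in entertainment), block it outright, or just track it.
POLICIES = {"monetize", "block", "track"}

@dataclass
class Reference:
    fingerprint: str   # precomputed fingerprint of the reference file
    owner: str
    policy: str        # one of POLICIES

def match_upload(upload_fp: str, references: list[Reference]) -> Optional[Reference]:
    """Return the matching reference, if any. Real systems match fuzzily
    on audio/video features rather than on exact fingerprint equality."""
    for ref in references:
        if ref.fingerprint == upload_fp:
            return ref
    return None

def apply_policy(upload_fp: str, references: list[Reference]) -> str:
    ref = match_upload(upload_fp, references)
    if ref is None:
        return "publish"  # no claim; the video goes live unencumbered
    if ref.policy == "monetize":
        return f"publish, route ad revenue to {ref.owner}"
    if ref.policy == "block":
        return "reject upload"
    return f"publish, report viewership stats to {ref.owner}"
```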
Thank you, Madam Chair. I do want to thank you and the ranking members of these committees, and I want to thank the witnesses for your attendance as well. It has been very informative, even if you're not able to answer all the questions. This is not the first time our committee has examined how social media and the internet can be a source of innovation and human connection, which we all enjoy when those connections are positive, and also a source of crime and criminality. The witnesses are experts in their fields, and I appreciate hearing all of what you have to say about how Section 230 should be reviewed and what changes should be considered. I think a lot needs to be discussed to understand the full scope of what Section 230 covers, whether it's cyberbullying, hate speech on YouTube or elsewhere, or the illicit sale of substances. I think the question today is twofold: first, we must ask if content moderators are doing enough, and second, we must ask if congressional action is required to fix these challenges. I think it is essentially the second question that we are really facing today, and after reviewing the testimony you submitted, we have some differences of opinion on that second question. So I would like to ask everyone the same question, and it is probably both the easiest question to answer and the most difficult, because it is exceedingly vague: what do good and bad content moderation look like? I'll start with you, Mr. Huffman. Thank you for that philosophically impossible question, but I do think there are a couple of easy answers that I hope everyone on this panel would agree with. Bad content moderation is ignoring the problem. That was the situation pre-230, and that is what we were facing in the early internet. I think there are many forms of good moderation. What is important to us at Reddit is twofold: one, empowering our users and communities to set standards of discourse in their communities and amongst themselves, which we think is the only real solution; and two, what 230 provides us, which is the ability to look deeply into our platform, to investigate, and to use finesse and nuance when we are addressing these challenges. Thank you. What makes content moderation bad? What's the difference between good and bad content moderation? Okay. So that is what we are talking about; but it precedes the question of why we are here, what kinds of harm got us to the table today, and why we should even be talking about changing Section 230. I would say what is bad, or incredibly troubling, is where sites are permitted to have an entire business model of abuse and harm; that is the worst of the worst, along with sites that induce and solicit illegality and harm. That, to me, is the most troubling. That is the problem, and then the question is how you deal with the problem. I have some answers, and we can submit them. I want to get to as many of you as possible. Thank you for the question. I actually think it is a great question, and as civil liberties online are our primary goal, I think good content moderation is precise, transparent, and careful. But what we see far too often is that, in the name of content moderation and making sure the internet is safe for everybody, all kinds of valuable and lawful conversations are taken offline. There are details in my testimony, but I will point to one example.
Groups attempting to document war atrocities are often flagged as violating terms of service, because of course the material they post is horrible, but the point is actually to support political conversations and accountability, and it is very difficult for the service providers to tell the difference. Thank you. Ms. Peters. If it's illegal in real life, it has to be illegal online; content moderation has to focus on illegal activity, and I think there has been too little investment in technology that would improve this for the platforms, precisely because of Section 230's immunities. I'm sorry I asked such an erratic question, but I would like to get responses from the final two witnesses in writing, please. Thank you, and I yield back. The gentleman yields back, and now we will recognize Mr. Carter for five minutes. Thank you, madam chair, and all of you for being here. I know that you all understand how important this is, and I hope and believe you all take it seriously; thank you for being here and for participating. I'm going to start with you, Ms. Peters. In your testimony you pointed out that there is clearly quite a bit of illegal content that the online platforms are still hosting, for instance illegal pharmacies where you can buy pills without a prescription, terrorists that are profiteering from all sorts of artifacts, and also products from endangered species, and then it gets even worse: you mentioned the sale of human remains and child exploitation, just gross things if you will. How much effort do you feel the platforms are putting into containing and stopping this? Well, it depends on the platform, but that is a very good question, and I would like to put a question back to the committee: when was the last time anybody here saw genitalia on Facebook? They can keep genitalia off these platforms, and they can keep child sexual abuse off these platforms; the technology exists. These are policy issues, whether it is the policy to allow a doctored video of Nancy Pelosi or the policy to allow pictures of human genitalia. I get it, I understand. Do you ever go and meet with them and express this to them? Absolutely. How are you received? We are typically told that the firm has intelligent people on it, that they are creating AI and that AI will work. When we have presented evidence of specific, identifiable crime networks and terror networks, we have been told that they would get back to us, and they don't; that has happened multiple times. Are you ever told that they don't want to meet with you? No, we usually get meetings or calls. Do you feel that you have a good relationship, that the effort is being put forward? I don't think the effort is being put forward. You see, that is where I struggle. I'm doing my best to keep the federal government out of this; I don't want to stifle innovation, and I'm really concerned about that. But at the same time, look, we cannot allow this to go on; if you don't do it within your brand, you will force us to do it for you, and I don't want us to do that. It's just as clear as that. Ms. Peters, you also said in your testimony that you received funding from the State Department to map wildlife supply chains, and that is when you discovered that there was a large retail market for endangered species on platforms like Facebook and WeChat. Have any of these platforms made a commitment to stop this, and if they have, is it working? Is it getting any better?
That is a terrific example to bring up. A number of tech firms have joined a coalition and have taken a pledge to remove endangered species content by 2020. I'm not aware that anything has changed, and we have researchers going online and finding wildlife markets all the time. All right, I'm going to be fair and, I'm sorry, I can't see that far, Ms. Oyama, I'm going to let you respond to that. We can always do more, and I think we are committed to always doing more. I appreciate that, and I know that; I don't need you to tell me that, I need you to tell me that you have plans in place. Let me tell you what we are doing in the two categories you mentioned. Wildlife and endangered species content is prohibited from our ads, and we are part of the coalition. On the national epidemic that you mentioned, opioids, we are hugely committed to doing our part to combat this epidemic. There is an online component and an offline component. On the online component, research shows that less than 0.05 percent of misuse of opioids originates on the internet. What we have done, especially with Google Search, is work with the FDA so that they can send us a warning letter if they see that there is a search result for a rogue pharmacy. Then there is also an important offline component: we work with the DEA on prescription take-back days, and we surface those locations on Google Maps. I am happy to comment further. I would like to sit down and talk with you further about this. Mr. Huffman, I will give you an opportunity, because my staff has gone on Reddit and they have googled, if you will, or searched for illegal drugs, and it comes up, and I suspect you're going to tell me the same thing, that we are working on it and we almost have it under control, but it is still coming up. I have a slightly different answer, if you will indulge me. First of all, it is against our rules to have controlled goods on our platform, and it is also illegal, and 230 doesn't give us protection against criminal liability, but we do see content like that on our platform. If you search any corpus of text, including your own emails, I'm sure you would find a hit for Adderall; that is the case on Reddit as well. That kind of content, when it comes up, is banned and gets removed by filters, but there is a lag between something being submitted and something being removed; naturally, that's how the system works. That said, we take this situation very seriously, the technology continues to improve along these lines, and that is exactly the sort of ability that 230 gives us: the ability to look for this content and remove it. Now, to the extent that you and your staff found this content and it is still on our platform, we would be happy to follow up later, because it shouldn't be. You know, my sons are grown now, but I feel like a parent having to plead with my child: again, please don't make me have to do this. Thank you, madam chair, I yield back. The gentleman yields back, and now I recognize Congresswoman Kelly for five minutes. Thank you, madam chair, and thank you for holding this important hearing. The intended purposes of Section 230 were to allow platforms to moderate content under the Good Samaritan provision, which protects actions taken in good faith to restrict access to or availability of material that the provider or user considers to be lewd, excessively violent, harassing, or otherwise objectionable, whether or not it is constitutionally protected.
Last Congress, Section 230 was amended to make platforms liable for activity related to sex trafficking. In the past, some have criticized the law for being too ambiguous, and as part of my work on this committee I share that concern; I have sought to work with stakeholders to protect consumers while allowing innovators to innovate, and we are hoping for a more consumer-friendly internet. It is my hope that our discussion will set the standards for doing this responsibly. Professor Citron, in your testimony you discuss giving platforms immunity if they can show their content moderation practices writ large are reasonable. As the chairman has referenced, how should companies know where the line is, or whether they are doing enough? Where is the line? Reasonableness matters in context, but there are certainly some baseline presumptions about what would constitute reasonable content moderation practices. There are some sites that don't engage in moderation at all, and in fact encourage abuse, but there are baselines: academic writing over the last ten years, and the work I've done with companies, has established a baseline of best practices. Naturally that is going to change depending on the challenge, so we will have different approaches to new and evolving challenges, and that is why a reasonableness approach preserves the liability shield but does so in exchange for those efforts. Would you agree that any changes we make, we have to ensure they do not create further ambiguity? If I may, and this was disappointing to someone who helped some offices work on the language, had we included the language "knowingly facilitated" it would have been clearer; my biggest disappointment was unfortunately how it came out, because we almost see ourselves back where we were with the initial cases: either we are seeing overly aggressive responses to speech online, which is a shame, or we see sites doing nothing. The way people communicate is changing; information can start on one platform, move to another, and go viral very quickly. The 2016 elections showed how false information can spread and how it can be used to deter different populations. Offensive content is shared in groups and then goes on to a wider audience. Ms. Peters, what do you believe is the responsibility of tech companies to monitor and proactively remove content that spreads before being flagged by users? I believe that companies need to moderate and remove this content when it concerns clearly illegal activity. If it's illegal in real life, it ought to be illegal to host it online: drug trafficking, human trafficking, serious organized crime, and designated terror groups should not be given space to operate on our platforms. I also think that CDA 230 needs to be revised to provide more opportunities for federal, state, and local law enforcement to have the legal tools to respond to illicit activities; that is one of the reasons this hearing was brought up. So Ms. Oyama and Mr. Huffman, what are you doing to stop the spread of misinformation content? Are there flags that pop up if the same content is shared 2,000 or a hundred thousand times?
Yes, thank you for the question. On YouTube we are using machines and algorithms: once a piece of content is identified and removed, our technology prevents it from being re-uploaded. But you raise an important point about working across platforms and industry collaboration, and a good example would be the Global Internet Forum to Counter Terrorism; we are one of the founding members and leaders of that. One of the things that we saw during the Christchurch shooting was how quickly this type of content can spread, and we were grateful to see that last week some of the crisis protocols we put into place kicked in: there was a shooting in Germany and a piece of content that appeared on Twitch, and the companies were able to engage in the crisis protocol, hashes were shared across the companies, and that worked to block it. And I'm out of time. The gentlelady yields back. My first question is for Dr. McSherry. I understand that in the past industry has backed trade deals specifically to embed language mirroring this statute; do you see its inclusion in trade agreements as a means to ensure that we may not revisit the statute? No. Okay, thank you very much. Then, madam chair, I would like to ask that the blog post from January 23, 2018 by Jeremy Malcolm be entered into the record. Without objection. Thank you, madam chair, I appreciate it. The next question is for Mr. Huffman and Ms. Oyama. I once asked Mark Zuckerberg how soon illegal opioid ads would be removed from his website, and his answer was that the ads would be reviewed when they were flagged by users as being illegal or inappropriate. That of course is a standard answer in the social media space; however, Mr. Zuckerberg also said at the time that industry needs to, and I quote, build tools that proactively go out and identify ads for opioids before people have to flag them for us to review, end quote. That would significantly, in my opinion, cut down on the time an illegal ad would be on the website. And you see, Mr. Huffman and Ms. Oyama, it has been a year and a half; this is an epidemic and people are dying, and I'm sure you agree with this. Has the industry been actively working on artificial intelligence flagging standards that can automatically identify these ads? What is the status of this technology, and what can we expect in implementation if they have been working on it? Whoever would like to go first is fine. Mr. Huffman. So Reddit is a little different: our ads go through a strict human review process, making sure that not only are they on the right side of our content policy, which prohibits the buying and selling of controlled substances, but also of our much stricter ads policy, which has a much higher bar to cross, because we do not want ads that cause controversy on our platform. Okay, but we have to be proactive as far as this is concerned, as Mr. Zuckerberg indicated. You know, these kids are dying, people are dying, and we just can't stand by and let this happen, with access to these opioids and different types of drugs. Ms. Oyama, would you like to comment, please? We certainly agree with your comments on the need for proactive effort. On Google ads we have something called a risk engine that helps us identify whether an ad is bad when it comes into the system; last year we kicked out
2.3 billion ads under these policies. Any online pharmacy that wants to show up in an ad must also be independently verified by an independent group called LegitScript, and then of course in the specific case of opioids, which are controlled under federal law, there is a lot of important work that we have done with the DEA, the FDA, and even with pharmacies to help promote things like prescription take-back days, where people can take opioids in and drop them off so they're not misused later on. One of the things that we have seen is that the vast majority, more than 99 percent, of opioid misuse happens in the offline world, from a doctor that is prescribing or a family member or friend, so using technology to also educate and inform people that might potentially be victimized by this is important in the ad space. Okay, how about anyone else on the panel, would you like to comment? Is the industry doing enough? I don't think the industry is doing enough. There is an enormous amount of drug sales taking place in groups on these platforms, Instagram and Facebook groups; the groups are the epicenter, and this is why industry has to be monitoring them. If you leave this up to users to flag, and the activity is inside a private or secret group, it is just not going to happen. The firms know what users are getting up to; they are monitoring all of us all the time so they can sell us things, so they can figure this out. Congressman, can I also add that there are two issues: there are the ads, but there is also the native content. You heard that staff went on this morning and searched on Reddit, and that content is there even though it is not in the ads; the same is true with Google Search. So there are two places you have to worry about, not just the ads. Very good, thank you, I yield back. The gentleman yields back, and now I call on the chairman of our full committee. Thank you, madam chair. I wanted to start with Ms. Oyama. In your written testimony you discuss guidelines for hate speech, and studies show that hate speech and abuse are on the rise on social media platforms. Section 230 incentivizes platforms to moderate, but does it also incentivize platforms to take a hands-off approach to removing hate speech, if you will? Thank you so much for the question. On the category of hate speech, we have a very clear policy against it: that would be speech that incites violence or speech that is hateful against groups with specific attributes, whether their race, religion, sex, or disability status, so that is prohibited. It can be detected either by our machines, which is the case in more than 87 percent of removals, by community flaggers, or by individual users, and all of those actions that we take are vitally dependent on the protection of CDA 230, which gives providers the ability to take content down. Last quarter we saw an increase in the amount of content that our machines were able to find and remove. We do face claims over this; people may sue us for defamation, and they may bring other legal claims, and 230 is what enables not only Google or YouTube but any site with user-generated content, any site on the internet, to be able to moderate that content. So I think we would just encourage Congress to think about not harming the good actors, the innocent actors that are taking these steps, in an effort to go after truly bad criminal actors, who are fully exempted from CDA 230 anyway; they should be penalized, and law enforcement will play an important role in bringing that down, as will civil cases where there is platform liability for bad actors. Thank you.
Dr. Farid, in your written testimony you describe the proliferation of online extremism and domestic terrorism. As you know, there is both criminal and civil liability associated with providing material support to terrorists. I want to start with Dr. McSherry, understanding that 230 doesn't apply to federal criminal law: do you know whether platforms have used 230 to shield themselves from liability for allowing their platforms to be used for terrorist propaganda? There are ongoing cases, and there are several cases where platforms have been accused of violating civil laws by hosting such content, and they have invoked Section 230 in those cases, quite successfully, and I think, if you look at a lot of these cases, quite appropriately. The reality is that it is very difficult for a platform to tell in advance, to draw the line in advance, between content that is protected political communication and content that steps over the line, so these cases are hard and complicated and they have to get resolved. Section 230 also, because of the additional protections it provides, creates a space for service providers to choose to moderate and enforce their own policies. Let me ask you, do you have any thoughts on how this should be addressed from a technological perspective? I want to start by saying, when you hear about progress being made, and we heard it from Reddit, you should understand that it has come only from intense pressure: pressure from advertisers, pressure on Capitol Hill and in the EU, pressure in the press. There is bad news, there is bad PR, and then they get serious. For years we struggled to get social media companies to do more about extremism and terrorism online, and we hit a hard wall; then the EU started putting pressure on, advertisers started putting pressure on, and we started getting responses. That is exactly what this conversation is about: what is the underlying motivating factor? Self-regulation alone will not do it, so the pressure has to come from other avenues, and I think applying pressure through modest changes to 230 is the right direction. I agree that where there are good actors we should encourage that change and have them help us clean up and deal with the problems that we have, but I have been in this fight for over a decade now and it is a very consistent pattern: eventually you get enough pressure and the companies start making changes. I think we should get ahead of that, recognize that we can do better, and start doing better. Thank you, madam chair. Now I recognize Congressman Gianforte for five minutes. Thank you, madam chair, and thank you all for being here today. About 20 years ago I harnessed the power of the internet to launch a business to improve customer service. That company was called RightNow Technologies, and from a spare bedroom in our home we eventually grew that business to be one of the largest employers in Montana; we had about 500 high-wage jobs, and we had about 8 million unique visitors per day, so I understand how important Section 230 can be for a small business. This important liability shield has gotten mixed up, however, with complaints about viewpoint discrimination, and I want to cite one particular case. In March of this year the Missoula-based Rocky Mountain Elk Foundation reached out to my office because Google had denied one of their advertisements. The foundation did what it had done many times: it purchased paid advertising on the Google network to promote a short video about a father hunting with his daughter.
This time, however, the foundation received an email from Google, and I quote: any promotions about hunting practices, even when they are intended as a healthy method of population control or conservation, are considered animal cruelty and inappropriate to be shown on our network, end quote. The day I heard about this I sent a letter to Google, and you were very responsive, but the initial position taken was absurd; hunting is a way of life in Montana and many parts of the country. I'm very thankful that you worked quickly to review that, but I remain very concerned about efforts to stifle this kind of advocacy and how these groups are treated, and I worry that similar groups have faced similar efforts to shut down their advocacy; we really don't know how many hunting ads Google has blocked in the last five years. In my March letter I invited Google's CEO to meet with leaders of our outdoor recreation businesses in Montana. I haven't heard anything back, and Ms. Oyama, I would extend the invitation again. I think, frankly, it would help Google to get out of Silicon Valley, come to Montana, sit down with your customers, and hear about the things that are important to them; I would be happy to host that visit. We would love to meet with you there. I think it is important to understand the work that these groups do to further conservation and to help species thrive, and as an avid hunter and outdoorsman myself, I know many businesses in Montana focus on hunting and fishing, and I worry that they may be denied the opportunity to advertise on one of the largest online platforms. I also worry that an overly burdensome regulatory regime could hurt businesses and stifle a high-tech sector that is rapidly growing. So the invitation is open. One question for you, Dr. Farid: how can we walk this line between protecting small businesses and innovation versus overly burdensome regulation? It is the right question to ask. I think you have to be very careful, because we now have near monopolies in the technology sector, and if we start regulating now, small companies may not be able to keep up. There are ways of handling this: in the EU and in the UK, as they talk about regulation, they are creating carve-outs for small platforms based on user thresholds, so I think we should tread lightly here. Ms. Peters made the point that we want to inspire competition for better business models, and I think there are mechanisms to do that, but we have to think carefully about it. We have had a lot of discussion about the efforts you're taking to get criminal activity off the network. I applaud that, and we should continue doing that, but as a follow-up, doctor, how do we ensure that content moderation doesn't become censorship and a violation of our First Amendment? The way we have been thinking about this is as a collaboration between humans and computers. What computers are very good at doing is the same thing over and over again; what they're not good at is nuance, subtlety, and inference from context. So with child sexual abuse material, the way it works is that once an image is identified as sexually explicit, we fingerprint that content and then remove every subsequent copy of it, and the false alarm rate we operate at is about one in 50 billion, because that is the scale you need to be operating at: if you're going to deploy automatic technology, it has to work at a very high scale. Computers cannot do all of this on their own, so we also have to keep human moderators.
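[Editor's note: a minimal sketch of the fingerprint matching Dr. Farid describes, under stated assumptions: the tiny "average hash" and the 8x8 grid below are toy stand-ins invented for illustration, while production systems use far more robust perceptual hashes tuned until the false-alarm rate approaches the one-in-50-billion figure he cites.]

def average_hash(pixels):
    # pixels: an 8x8 grid of 0-255 grayscale values -> a 64-bit fingerprint
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return [1 if p > mean else 0 for p in flat]

def hamming(a, b):
    # Number of fingerprint bits that differ
    return sum(x != y for x, y in zip(a, b))

def matches_known(upload_pixels, known_hashes, max_distance=5):
    # Allowing a small distance lets re-encoded copies still match; the
    # threshold trades recall against the false-alarm rate.
    h = average_hash(upload_pixels)
    return any(hamming(h, k) <= max_distance for k in known_hashes)

banned = [average_hash([[(10 * (r + c)) % 256 for c in range(8)] for r in range(8)])]
reencoded = [[(10 * (r + c) + 3) % 256 for c in range(8)] for r in range(8)]
print(matches_known(reencoded, banned))  # True: the near-duplicate is caught

[Matching known content is a far easier problem than classifying new content, which is why this piece can run fully automatically while general moderation cannot.]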
There are 500 hours of video uploaded to YouTube every minute; there are not enough moderators, and you can do the math yourself on how many people it would take to look at hours and hours of video, but we have to keep up our human moderation as well. Okay, thank you. Ms. Oyama, I look forward to seeing you in Montana. The gentleman yields back, and now I recognize the next congresswoman for five minutes. Thank you, madam chairwoman and ranking members, for holding this important hearing. I think many of us here today are seeking to more fully understand how Section 230 of the Communications Decency Act can work well in an ever-changing virtual and technological world. This hearing is really significant, and as Ms. Oyama said, I do not want us to forget the important things that the internet has provided us; but also, as Mr. Huffman said of Reddit, we must constantly be evolving, and our policy needs to evolve to face new challenges while also balancing our civil liberties. We have a very important balance here. My question really centers on bad content moderation. I want to start off by saying that the utilization of machine learning algorithms and artificial intelligence to filter through content provides an important technological solution to the increasing amount of content to moderate; however, as we become more reliant on algorithms, we are increasingly finding blind spots and gaps that may be difficult to bridge with more and newer code, and that may further marginalize sensitive groups. As I thought about this, I thought about groups like veterans or the African American community in the 2016 election. Dr. Farid, can you describe some of the challenges with moderation by algorithm, including possible bias? You're absolutely right: when we automate at the scale of the internet, we are going to have problems, and we've already seen that; we know that face recognition does much worse on women and people of color than it does on white men. The problem with automatic moderation is that it doesn't work at scale: when you are talking about billions of uploads and your algorithm is 99 percent accurate, which is very good, you are still making one-in-100 mistakes, and that is literally tens of millions of mistakes a day at the scale of the internet. So the underlying idea that we can fully automate this, so as not to take on the responsibility, does not work, and I fear that we have moved too far in that direction, saying give it time and the algorithms will get there, because we don't want to hire humans, because of the expense. We know that will not work in the next few years. It is a little bit worse than that, because it also assumes an adversary that is not adapting, and we know that the adversary is going to adapt; we know that all machine learning that is meant to identify content is vulnerable, in that you can make small manipulations to content and fool these systems. So I want to ask you a quick question. Both of you talked about the number of human moderators that you have available to you, and I know that we have had many hearings on challenges of diversity in the tech field. I'm assuming that yours are more from the user perspective in terms of moderators. The 10,000 people that you have mentioned, are these people that you hired, or are they users? Just quickly, so everybody knows. For us it is about 100 employees; of course, millions of users participate as well.
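[Editor's note: to make the scale arithmetic in Dr. Farid's answer concrete, here is the same calculation written out, with an assumed round number of one billion uploads a day; the volume is illustrative, not a figure from the testimony.]

uploads_per_day = 1_000_000_000                # assumed round number

classifier_accuracy = 0.99                     # "99 percent accurate"
print(f"{uploads_per_day * (1 - classifier_accuracy):,.0f} classifier mistakes per day")
# -> 10,000,000 classifier mistakes per day

fingerprint_false_alarms = 1 / 50_000_000_000  # the cited one-in-50-billion rate
print(f"{uploads_per_day * fingerprint_false_alarms:.2f} fingerprint false alarms per day")
# -> 0.02 fingerprint false alarms per day

[The many-orders-of-magnitude gap between the two error rates is why fingerprint matching of known content can run unattended while general classifiers still need human review behind them.]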
The stats I mentioned are employees, and we also work with specialized vendors and with community flaggers, which can be an NGO, law enforcement, or an average user. I know in the interest of time we don't have a lot left, but could you provide us with information on the diversity of your moderators? That is one of my questions. Also, I don't like to make assumptions, but I'm going to assume that it might be a challenge to find diverse populations of individuals to do this role, so if we could have a follow-up on that. And then my last question would be for the panel: what should the federal government, what should we, be doing to help in this space? I'm really concerned about the capacity to do this and to do it well, if anybody has any suggestions or recommendations. I think this conversation is helping; I think you're going to scare the technology sector, and I think that is a good thing to do. I am out of time and have to yield back, but thank you so much, all of you, for your work. And now, last but not least, the representative who will be recognized for five minutes. Thank you, madam chairwoman. First of all, thank you all for being here; I'm the last one, so you're in the home stretch. It's amazing that we are here today when we think about how far the internet has progressed, one of the greatest inventions in human existence: connecting the world, giving a voice to people whose stories were never told before, providing knowledge at our fingertips. It is just incredible, and Section 230 has been a big part of this, essentially holding back lawsuits and enabling innovation, but it has also created a breeding ground for defamation and harassment, for impersonation, for election interference, and for stirring global terrorism and other extremism. So we have this wonderful community on one side and then all these terrible things on the other side. My biggest concern is this rising harm; that is what the alarms we hear from my constituents are about. So I want to start with some basics, just to get everybody's opinion: who do you all think should be the cop on the beat, the primary enforcer, with the choices being the FTC, the DOJ, or the courts? Let's go down the line and hear from each of you; those are my only three answers, you can't give a fourth. So on our platform, our users. I'm going to take your second choice, the courts, because it forces the companies to be the norm producers. Okay. I think the courts have a very important role to play, but the principle for us is that at the end of the day users should be able to control their internet experience and should have many tools to do that. I think it is a ridiculous argument, about the vast majority of people, and I've studied organized crime, hold on, let me answer the question: courts and law enforcement; a small percentage of people statistically will commit crimes at any company of scale, and you have to control for it. Thanks. Ms. Oyama. Ours has always been a multi-stakeholder approach, but the courts and the FTC do have jurisdiction; as you know, the FTC has broad authority, and we are always looking at choices. Thank you. Dr. Farid. I agree with the multi-stakeholder view; we all have a responsibility here. If we were to tighten up the rules, it would be great to hear first from you, Dr. Farid: if we limited remedies to injunctive relief, do you think that would be enough, or should damages be available as well? I'm not the one who should be answering that question, with due respect. Ms. Oyama and Mr. Huffman: would injunctive relief be enough to change certain behaviors?
For us, there is power in injunctive relief, and I would want to echo the small businesses who say the framework has created certainty, which matters for their moderation and economic viability. Thank you. Similar answer; I would shudder to think what would have happened when we were smaller if we had had to pay for lawyers. You have written quite a bit on this; do you have any idea of what we should be looking at? When you say injunctive relief, all I can see is the First Amendment and prior restraint, so I think we need to be careful about the remedies that we consider; but if we allow law to operate, and people act unreasonably and recklessly, then I think liability should be available. Lastly, I want to talk about Section 230 in trade deals. I'm from Orlando, the land where a fictional wizard is a great asset. Ms. Peters, you talked a little bit about the issue of including this language in trade deals; how would that be problematic for a region like ours, where intellectual property is so critical? It's problematic because it will potentially tie Congress's hands, and that is exactly why industry is pushing to get this inside the trade deal; there are 90 pages of copyright language in it. So if we adjust laws here, that would affect the trade deals, in your opinion? There is no language that binds Congress's hands, and trade agreements often have carve-outs on this; there is value in a trade agreement that promotes a U.S. framework when countries like China and Russia are developing their own frameworks for the internet, and there is nothing in the current agreement that would limit your ability to look at 230 and decide that it needs tweaks later on. Thank you, I yield back. The gentleman yields back, and that concludes our period for questioning. I ask unanimous consent to put into the record a letter from CreativeFuture, a letter from the American Hotel and Lodging Association, a letter from the Consumer Technology Association, a letter from the Travel Technology Association, a letter from Airbnb, a letter from Common Sense Media, a letter from the Computer and Communications Industry Association, a letter from Representative Grace, a letter on the PLAN Act, a letter from the i2Coalition, a letter to the FCC, a letter from TechFreedom, a letter from the Internet Association, a letter from the Wikimedia Foundation, a letter from the Motion Picture Association, an article from The Verge, and a statement from R Street. Without objection. And now let me thank our witnesses. I think this was a really useful hearing, and those of you who stayed through this entire hearing, we appreciate it. I want to thank all of you for your thoughtful presentations and your written testimony, which went well beyond what we were able to hear today. I want to remind members that under committee rules they have ten business days to submit additional questions for the record to be answered by the witnesses who have appeared, and I ask the witnesses to respond to any such questions you may receive. The committee is adjourned. Thank you.
