These are the online experiences that we know today: looking up restaurant reviews on Yelp, catching up on SNL on YouTube, or checking on a friend or loved one on social media. These are experiences that we have come to know and rely on, and the platforms that we go to to do these things have been enabled by user-generated content as well as by the ability of these companies to moderate that content and create communities. Section 230 of the Communications Decency Act has enabled that ecosystem to develop. By giving online companies the ability to moderate content without equating them to the publisher or speaker of that content, we have enabled the creation of massive online communities of millions and billions of people who come together and interact. Today, this committee will be examining the world that section 230 has enabled, the good and the bad. I would like to thank the witnesses for appearing before us today. Each of you represents an important perspective related to content moderation in the online ecosystem. Many of you bring up complex concerns in your testimony, and I agree this is a complex issue. I know that some of you have argued that Congress should amend 230 to address things such as online criminal activity, disinformation, and hate speech, and I agree these are serious issues. Like too many other communities, my hometown of Pittsburgh has seen what unchecked hate can lead to. Almost a year ago our community suffered the deadliest antisemitic attack in our nation's history. The shooter did so after posting antisemitic remarks online, finally posting that he was going in. A similar attack occurred in New Zealand, and the gunman streamed his despicable acts on social media sites. While some of these sites moved to stop the spread of the content, many did not move fast enough. The same algorithms meant to help sports highlights and celebrity news go viral helped amplify his heinous acts. 
In 2016 we saw similar issues when foreign adversaries used the power of these platforms against us to sow disinformation, division, and distrust in our leaders and institutions. Clearly, we all need to do better, and I strongly encourage the witnesses before us that represent these online platforms, and other major platforms, to do so. The other witnesses on the panel bring up serious concerns about the kind of content available on your platforms and the impact that content is having on society. As they point out, some of those impacts are very disturbing. That being said, section 230 does not only protect the largest platforms or the most well-known websites. It enables comment sections on individual blogs, lets people leave honest and open reviews, and allows free and open discussions about controversial topics. The kind of ecosystem that has been enabled by more open online discussion has enriched our lives and our democracy. The ability of individuals, particularly in marginalized communities, to have their voices heard cannot be understated. The ability of people to post content that speaks truth to power has created political movements in this country and others that have changed the world we live in. We all need to recognize the incredible power this technology has for good, as well as the risks that we face when it is misused. I want to thank you again for being here, and I look forward to our discussion. I would like to yield the balance of my time to my good friend. Thank you, Mr. Chairman. I want to thank the witnesses for being here today. In April 2018, Mark Zuckerberg came before Congress and said, "It was my mistake, and I'm sorry," regarding the interference in the 2016 presidential election. Fast-forward 555 days, and I fear Mr. Zuckerberg may not have learned from his mistakes. Recent developments confirm what we have all feared: Facebook will continue to allow lies, once again making the online ecosystem fertile ground for election interference in 2020. 
The decision to remove blatant falsehoods should not be a difficult one; the choice between hate speech and online bullying on one hand and fact-driven debate on the other should be easy. If Facebook does not want to police the truth in political speech, then they should get out of the game. I hope this hearing produces a robust discussion, because we need it now more than ever. Mr. Chairman, I yield back. Thank you. The chair now recognizes the ranking member of the subcommittee for five minutes for his opening statement. Thank you, Mr. Chairman, for today's hearing, and thank you to our witnesses for appearing before us. Welcome to today's hearing on content moderation and a review of section 230 of the Communications Decency Act. This hearing is a continuation of a serious discussion that we began last session on how Congress should examine the law and ensure accountability and transparency for the hundreds of millions of Americans using the internet today. We welcome witnesses from a balanced group of stakeholders on section 230, ranging from large to small companies as well as academics. Let me be clear: a rushed change could lead to a slippery slope, a death by a thousand cuts that some would argue would upend the internet industry just as surely as if the law were repealed. Before we discuss whether or not Congress should make modifications to the law, we should understand how we got to this point, and it is important to look at 230 and when it was written. At the time, the decency portion of the Telecom Act of 1996 included other prohibitions on objectionable content on the internet. Provisions that were written to target obscene content were struck down by the Supreme Court. Section 230 was intended to encourage interactive computer services to proactively take down offensive content, to help us control, at the portals of our computers, what comes in and what our children see. 
It is unfortunate that such a broad interpretation of section 230 has taken hold. Rather than having to demonstrate that they are doing everything possible, and instead of being encouraged to use the tools the law provides, numerous platforms have hidden behind the shield and used it to avoid litigation without having to take responsibility. Not only are good Samaritans being selective in taking down harmful or illegal activity, but section 230 has been interpreted so broadly that bad Samaritans can skate by without accountability. That is not to say that all platforms abuse the protections afforded by Congress; many do great things, and some of the bigger platforms take down billions of pieces of harmful content annually. Oftentimes, though, these are the exception, not the rule. Today we will dig deeper to learn how platforms decide to remove content, whether with the tools provided by section 230 or under your own self-constructed terms of service. Under either authority, we should encourage enforcement to continue. Mr. Chairman, I thank you for holding this important hearing so we can have an open discussion on the intent of 230 and whether we should reevaluate the law. We must ensure that platforms are held reasonably accountable for activity on their platforms without drastically affecting innovative startups. With that, I yield back the balance of my time. The gentleman yields back. This is a joint hearing between our subcommittee and the Subcommittee on Consumer Protection and Commerce, and I would like to recognize the chair of that subcommittee for five minutes. Thank you, Mr. Chairman, and good morning to all the panelists; thank you for being here. The internet certainly has improved our lives in many, many ways and enabled Americans to more actively participate in society, education and commerce. Section 230 of the Communications Decency Act has been at the heart of United States internet policy for over 20 years. Many say this law allowed speech to flourish, allowing the internet to grow into what it is today. 
In the early days of the internet, the law was intended to encourage online platforms to moderate user-generated content in order to remove offensive, dangerous or illegal material. The internet has come a long way since the law was first enacted, and the amount and sophistication of user postings has increased exponentially. Importantly, the number of Americans who report experiencing extreme online harassment, which includes sexual harassment, stalking, bullying and threats of violence, has gone up over the last two years; 37 percent of users say they experienced that this year. Likewise, extremism, hate speech, election interference and other problematic content is proliferating. The spread of such content is problematic, that is for sure, and it causes real harm that multibillion-dollar companies like Facebook, Google and Twitter can't or won't fix. As if this were not enough cause for concern, more for-profit businesses are attempting to use section 230 as a liability shield for activities that have nothing to do with third-party content or content moderation. In a recent Washington Post article, executives seemed to open the door to claiming immunity from labor, criminal and local liability based on section 230. This would represent a major unraveling of 200 years of social contract, community governance and congressional intent. Also at issue is the Federal Trade Commission's section 5 authority over unfair or deceptive practices. The FTC brings section 5 cases over website-generated content, but terms-of-service violations involving third-party content may also be precluded by 230 immunity. I also wanted to talk about injecting 230 into trade agreements. We have already seen that in the Japan trade agreement, and there is a real push to include it now in the United States-Mexico-Canada trade agreement. There is no place for that. I think that the laws of these other countries do not really accommodate what the United States has done with 230. 
The other thing is that we are having a discussion, an important conversation, about 230. In the midst of that conversation, and because of all the new developments, I think it is just inappropriate right now, at this moment, to insert this liability protection into trade agreements, and as a member of the working group that is helping to negotiate the agreement, I am pushing hard to make sure that it just is not there. Whether or not we need any adjustment to 230, the issue should just not be in trade agreements. So all of the issues that we are talking about today indicate that there may be a larger problem: that 230 no longer is achieving the goal of encouraging platforms to protect their users. Today, I hope that we can discuss holistic solutions; I am not talking about eliminating 230, but about taking a new look at it in light of the many changes that we are seeing in the world right now. I look forward to hearing from our witnesses on how the law can be made even better. I yield back. The chair recognizes the ranking member of the committee, Ms. Rodgers. Good morning. Welcome to today's joint hearing on online content management. As republican leader on the consumer subcommittee, my priority is to protect consumers while preserving the ability of small businesses and startups to innovate. In that spirit, we are discussing online platforms and section 230 of the Communications Decency Act. In the early days of the internet, two companies were sued for content posted on their websites by users. One company sought to moderate content on its platform and the other did not. In deciding these cases, the courts found that the company that did not make any content decisions was immune from liability, but the company that moderated content was not. It was after these decisions that Congress created section 230. Section 230 is intended to protect interactive computer services from being sued over what users post while allowing them to moderate content that may be harmful, illicit or illegal. 
This liability protection has played a critical and important role in how we regulate the internet, allowing small businesses and innovators to thrive online without the fear of frivolous lawsuits from those looking to make a quick buck. Section 230 is also largely misunderstood. Congress never intended to provide immunity only to websites that are neutral. Congress never wanted platforms to simply be neutral conduits but in fact wanted platforms to moderate content. The liability protection extended to allow platforms to make good-faith efforts to moderate material that is obscene, lewd, excessively violent or harassing. There is supposed to be a balance to section 230: small internet companies enjoy a safe harbor to innovate and flourish online, while companies are also incentivized to keep the internet clear of offensive and violent content by being empowered to act and clean up their own sites. The internet revolutionized freedom of speech by providing a platform for every American to have their voice heard and to access an infinite amount of information at their fingertips. Medium and other online blogs provide a platform for anyone to write. Wikipedia provides free, in-depth information on almost any topic you can imagine through mostly user-generated and user-moderated content. Companies that started in dorm rooms and garages are now global powerhouses. We take great pride in being the global leader in tech and innovation. But while some of our biggest companies have grown, have they matured? Today it is often difficult to go online without seeing harmful, disgusting or illegal content. To be clear, I fully support free speech, and society benefits from open dialogue and free expression online. I know there have been calls for big government to mandate or dictate free speech or ensure fairness online, and they are coming from both sides of the aisle. 
While I share some of the concerns that others have expressed and that are driving some of these policy proposals, I do not believe the proposals are consistent with the First Amendment. Republicans successfully fought to repeal the FCC's fairness doctrine for broadcast regulation during the 1980s, and I strongly caution against advocating for a similar doctrine online. It should not be the job of the FCC, the FTC or any government agency to moderate free speech online. Instead, we should continue to provide oversight of big tech and their use of section 230 and encourage responsible stewardship of content. This is key: how do we ensure they are responsibly earning their liability protection? We want companies to benefit not only from the shield but also to use the sword Congress afforded them to rid their sites of harmful content. I understand this is a delicate and nuanced issue. I want to be very clear: I am not for gutting section 230. It is critical for consumers and entities in the internet ecosystem. Misguided and hasty attempts to amend or repeal section 230 over bias or other reasons could have unintended consequences for free speech and for the ability of small businesses to provide new and innovative services. At the same time, it is clear we have reached a point where it is incumbent upon us as policymakers to have a serious and thoughtful discussion about achieving the right balance on section 230. I thank you for the time, and I yield back. The chair recognizes the chairman of the full committee for five minutes for his opening statement. Thank you, Chairman. The internet is one of the single greatest human innovations, a source of expression, community and economic opportunity, with trillions of dollars exchanged online every year. One of the principal laws that paved the way for the internet to flourish is section 230 of the Communications Decency Act, which is part of the Telecommunications Act of 1996. 
We enacted the section to give platforms the ability to moderate their sites and protect consumers without excessive risk of litigation, and to be clear, section 230 has been an incredible success. In the 20-plus years since section 230 became law, the internet has grown far more complex and sophisticated. In 1996, the global internet reached 36 million users, less than 1 percent of the world population, and only one in four Americans reported going online every day. Compare that to now, when all of us are online almost every hour that we are not sleeping, and earlier this year the internet passed 4.39 billion users worldwide. Here in the U.S. there are about 230 million smartphones that provide Americans access to online platforms. The internet is a central part of our economic fabric in a way that we could not have dreamed of when we passed the Telecommunications Act. But with that complexity and growth, we have seen the darker side of the internet grow. Online radicalization has spread, leading to mass shootings in schools, churches and movie theaters. International terrorists are using the internet to groom recruits. Platforms have been used for the illegal sale of drugs, including those that sparked the opioid epidemic. Foreign governments and fraudsters have polluted campaigns using new technology like deepfakes designed to sow civil unrest and disrupt democratic elections. There are constant attacks against women, people of color and other minority groups. Perhaps most despicable of all is the horrendous sexual exploitation of children online. In 1998 there were 3,000 reports of material depicting the sexual abuse of children online. Last year, 45 million such photos and videos were reported. While platforms are now better at detecting and removing this material, recent reporting shows law enforcement officers are overwhelmed by the crisis. 
These are issues that we cannot ignore, and tech companies need to step up with new tools to address these serious problems. Each of these issues demonstrates how online content moderation has not stayed true to the values underlying section 230 and has not kept pace with the increasing importance of the global internet. There is no easy solution to keeping this content off the internet. As policymakers, we have our ideas of how we might tackle the symptoms through content moderation online while also protecting free speech, but we must seek to fully understand the breadth and depth of the internet today, how it has changed and how it can be made better, and we have to be thoughtful, careful and bipartisan in our approach. It is with that in mind that I was disappointed that Ambassador Lighthizer, the U.S. Trade Representative, refused to testify today. The U.S. has included language mirroring section 230 in the United States-Mexico-Canada Agreement and in the U.S.-Japan trade agreement. The ranking member and I wrote to the ambassador in August raising concerns about why the USTR has included this language in trade deals even as we debate these provisions across the nation, and I was hoping to hear his perspective on why he believes that was appropriate. Including provisions in trade agreements that are controversial to Democrats and Republicans is not the way to get support from Congress, obviously. Hopefully the ambassador will be more responsive to bipartisan requests in the future, and with that, Mr. Chairman, I will yield back. The gentleman yields back. The chair would like to remind members that, pursuant to the committee rules, all members' written opening statements shall be made part of the record. Can mine be made part of the record? I apologize; the chair yields to my good friend, the ranking member. Times have changed. [laughter] Thank you, Mr. 
Chairman, and I want to welcome our witnesses. Thank you for being here; this is important work. I will tell you at the outset that we have another subcommittee meeting upstairs, so we will all be bouncing in between, but I have your testimony and look forward to your comments. This is without question a panel of experts in this field, and we are blessed to have you here. Last Congress we held a significant hearing that jumpstarted this discussion on the state of online protections, the legal basis of the modern internet ecosystem and, of course, the future of content moderation and how algorithms determine what we see online, an issue our constituents want to know more about. Today we will undertake a deeper review of section 230 of the Communications Decency Act, a portion of the 1996 Telecommunications Act. In August of this year, the chairman and I raised the issue of the appearance of export language mirroring section 230 in trade agreements. We did that in a letter to the U.S. Trade Representative, Robert Lighthizer. We expressed concerns about this internet policy being taken out of the context of its intent, and said that in the future the United States Trade Representative should consult our committee before advancing negotiations on these very issues, rather than cherry-picking just a portion of the law. I want to go back to the trade case. I thought the letter to the ambassador was going to send the right message: don't try to blow up USMCA. I have voted for every trade agreement, and I am a big free trader, but we are getting blown off on this and I am tired of it. Then we found out it is in the Japan agreement. Clearly they are not listening to our committee or to us, so we are serious about this matter. I have heard from the USTR, and this is a real problem, so take note. 
If we refer to section 230 as "the 26 words that created the internet," as has been popularized by some, we are already missing the mark, since by our word count, which you can use software to figure out, that phrase leaves out the Good Samaritan obligation. We should start talking about the section as the 83 words that can preserve the internet. All of the section's provisions should be taken together, not apart, and many of our concerns could be addressed if companies just enforced their terms of service. For better context, I believe a history lesson is in order. Today's internet looks different than CompuServe did when message boards dominated the internet in the '90s. While the internet is now more dynamic and content-rich, there were problems in its infancy managing the vast amounts of speech online. Chris Cox, a former member of this committee and an author of the legislation, pointed out on the House floor during debate on his amendment that no matter how big the army of bureaucrats, it is not going to protect my kids, because I do not think the federal government will get there in time. So Congress recognized then, as we do now, that we need companies to step up to the plate and curb illegal content on their platforms. The internet is not something to be managed by government. The act bestowed on providers and users the ability to go after illegal and inhumane content without being held liable. While the law was intended to empower, we have seen social media platforms slow to clean up harmful content while being quick to claim immunity for such content. In some cases internet platforms shirk responsibility for the content on their platforms. The liability shield, as expanded through common law, has obscured the central bargain that was struck: internet platforms with user-generated content are protected in exchange for making good-faith efforts to moderate harmful content. So let me repeat, for those that want to be included in the interactive computer services definition. 
Enforce your own terms of service. I look forward to a discussion on separating hateful speech from illegal content, on how we should think about CDA 230 protections for small platforms versus large ones, and on how various elements of the ecosystem shape what consumers see or do not see. Thank you for having this hearing, and I look forward to getting all the feedback from the witnesses, but I have to go vote in the other hearing. The administration does not listen to you guys either. I will let my statement speak for itself; clearly, we will find out if they are listening or not. I will reiterate that, pursuant to the committee rules, all members' written statements will be made part of the record. We now want to introduce our witnesses for today's hearing: Mr. Steve Huffman, CEO of Reddit, welcome; Ms. Danielle Citron, professor of law at Boston University School of Law; Dr. McSherry, legal director of the Electronic Frontier Foundation, welcome; Ms. Gretchen Peters, executive director of the Alliance to Counter Crime Online; Ms. Katherine Oyama, head of intellectual property policy for Google; and Dr. Farid, professor at the University of California, Berkeley. We want to thank you for joining us today, and we look forward to your testimony. At this time the chair will recognize each witness for five minutes to provide opening statements. Before we begin, I would like to explain our lighting system. In front of you is a series of lights. The light will initially be green, and it will turn yellow when you have one minute remaining. Please wrap up your testimony at that point. When the light turns red, we cut your microphone off. We don't really, but please try to finish before then. Chairpersons, ranking members, members of the committee, thank you for inviting me. My name is Steve Huffman, and I am the cofounder and CEO of Reddit, and I am here to share why 230 is critical to the open internet. Reddit moderates content in a fundamentally different way: we empower communities, and those communities rely on 230. 
Changes to 230 pose existential threats, not just to us but to thousands of startups across the country, and they would destroy what competition remains in our industry. A college roommate and I started Reddit in 2005 as a forum to find news and interesting content. Since then it has grown into a vast community-driven site where people find not just news and a laugh but perspective and a sense of belonging. Reddit is built on communities that are created and moderated by our users, a model that has taken years to develop, with lessons learned along the way. I left the company in 2009, and for a time Reddit lurched from crisis to crisis over the very questions of moderation we are discussing today. In 2015 I came back because I realized the vast majority of our communities were providing an invaluable experience and Reddit needed better moderation. The way Reddit handles moderation is unique in the industry: a model akin to our democracy, where everyone follows rules to self-organize and ultimately shares some responsibility for how the platform works. First, we have our content policy, fundamental rules that everyone on Reddit must follow; think of these as our federal laws. We employ a group collectively known as the anti-evil team to enforce these policies. Below that, each community creates its own rules. These rules, written by our volunteer moderators, are tailored to the unique needs of their communities and tend to be more specific and complex. The self-moderation our users do is the most scalable solution to the challenges of moderating content. Individual users play a crucial role as well: they can vote up or down on any piece of content and report it to our anti-evil team. Through this system, users can accept or reject any piece of content, thus turning every user into a moderator. The system is not perfect; it is possible to find things on Reddit that break the rules, but our effectiveness has improved. 
Analysis has shown our approach to be largely effective in curbing bad behavior, and when we investigated Russian attempts at manipulating our platform, we found that less than one percent of that content made it past our routine defenses of banning, community moderation, and downvotes from everyday users. We constantly evolve our content policy, and since my return we have made a series of updates addressing violent content, pornography, controlled goods and harassment. These are just a few of the ways we have worked to moderate in good faith, which brings us to the question of what Reddit would look like without 230. For starters, we would be forced to defend against anyone with enough money to bankroll a lawsuit. It is worth noting that the cases most commonly dismissed under 230 are regarding defamation. An open platform where people are allowed to voice opinions would be a prime target for such suits, enabling censorship through litigation. Even targeted changes to 230 would create a regulatory burden on the industry, benefiting the largest companies by placing a significant cost on smaller competitors. We have 500 employees and a large user base, more than enough to be considered a large company, yet we are an underdog compared to our nearest competitors, public companies that tend to be 100 times our size. But we recognize that there is truly harmful material on the internet, and we are committed to fighting it. It is important to understand that, rather than helping, even seemingly narrow changes to 230 can undermine the power of community moderation and cause real harm. Consider the opioid epidemic, which has been raised in discussions on 230. We have many communities where users struggling with addiction can find support to help them on their way to sobriety. With a carve-out in this area, those communities would become too risky, forcing us to close them down. This would be a disservice to the people struggling, and it is exactly the type of decision that restrictions on 230 would force on us. Section 230 is a uniquely American law. 
It is a balanced approach that has allowed the internet to flourish while also incentivizing good-faith attempts to mitigate the unavoidable downsides of free expression. While these downsides are serious and demand the attention of us in industry and you in Congress, they do not outweigh the overwhelming good that 230 has enabled. I look forward to your questions. You are recognized for five minutes. Thank you for having me and for assembling such a thoughtful panel to join. When Congress adopted section 230 over 20 years ago, the goal was to incentivize tech companies to moderate content. Although Congress of course wanted the internet, as they could imagine it at that time, to be open and free, they also knew that openness would risk offensive material, and I am going to use their words. So what they did was devise an incentive, a legal shield, for good Samaritans who were trying to clean up the internet, accounting for the failures of both under-filtering and over-filtering of content. The purpose of the statute is clear, but its words were not, so what we have seen is courts massively overextending section 230 to sites that are irresponsible in the extreme and that produce extraordinary harm. We have seen the liability shield applied to sites whose entire business model is abuse. Revenge porn operators and sites whose whole purpose is to serve users fake sex videos get to enjoy this immunity, and interestingly, it is not only bad Samaritans who have enjoyed a legal shield from responsibility but also sites that have nothing to do with speech, that traffic in dangerous goods, like Armslist.com. And the costs are significant. This overbroad interpretation allows bad Samaritan sites, reckless and irresponsible sites, to impose costs on people's lives, and I am going to take the case of online harassment, because I have been studying it for the last 10 years. The costs are significant to women and minorities. 
The online harassment often hosted on these sites is costly to people's central life opportunities. When a Google search of your name contains rape threats, your nude photo posted without your consent, your home address because you have been doxxed, and lies and defamation, it is hard to get a job and it is hard to keep a job, and victims are driven offline in the face of online assaults. They are terrorized; they often change their names, and they move. So in many respects the free-speech calculus is not necessarily a win for free speech, as we are seeing diverse viewpoints and individuals being chased offline. Now, the market, I think, ultimately is not going to solve this problem. So many of these businesses make money off of online advertising and the salacious content that attracts eyeballs, so we cannot rely on the market itself to solve this problem. That leaves, of course, legal reform, and the question is how we should do it. We have to keep section 230; it has tremendous upside. But we should return it to its original purpose, which was to condition the shield on being a good Samaritan, on engaging in what Benjamin Wittes and I have called reasonable content moderation practices. There are other ways to do it, and in my testimony I sketch some solutions, but we have got to do something, as doing nothing has costs. It says to the victims of online abuse that their speech and their equality are less important than the business profits of some of these companies. The chair recognizes Dr. McSherry for five minutes. As legal director for the Electronic Frontier Foundation, I want to thank the chairs, ranking members and members of the committee for the opportunity to share our thoughts with you today on this important topic. For nearly 30 years, EFF has represented the interests of technology users, both in court cases and in broader policy debates, to ensure that law and technology support our civil liberties. 
Like everyone in this room, we are well aware that online speech is not always pretty. Sometimes it's extremely ugly and causes serious harm. We all want an internet where we are free to meet, create, organize, share, debate and learn. We want to have control over our online experience and to feel empowered by the tools we use. We want our elections free from manipulation, and we want women and marginalized communities to be able to speak openly about their experiences. Chipping away at the legal foundations of the internet in order to pressure platforms to better police the internet is not the way to accomplish those goals. Section 230 made it possible for all kinds of voices to get their message out to the whole world without having to acquire a broadcast license, own a newspaper or learn how to code. The law has thereby helped remove much of the gatekeeping that once stifled social change and perpetuated power imbalances. And that's because it doesn't just protect tech giants. It protects regular people. If you've forwarded an email, a news article, a picture or a piece of political criticism, you've done so with the protection of section 230. If you maintain an online forum for a neighborhood group, you've done so with the protection of section 230. If you use Wikipedia to figure out where George Washington was born, you've benefited from section 230. And if you are viewing online videos documenting events in real time in northern Syria, you're benefiting from section 230. Intermediaries, whether social media platforms, news sites or email forwarders, are protected by section 230 not just for their benefit; they are protected so they can be available to all of us. There's another practical reason to resist the impulse to amend the law to pressure platforms to moderate user content. Simply put, they're bad at it. As EFF and many others have shown, they regularly take down all kinds of valuable content, partly because it's often difficult to draw clear lines between lawful and unlawful speech at scale.
Those mistakes often silence the voices of already marginalized people. Moreover, increased liability risk will inevitably lead to over-censorship. It's a lot easier and cheaper to take something down than to pay lawyers to fight over it, particularly if you're a smaller business or a nonprofit. Automation is not a magical solution; context matters, and very often, when you're talking about speech, robots are pretty bad at nuance. For example, in December 2018 the blogging platform Tumblr announced a ban on adult content. In an attempt to explain the policy, they identified several types of content that would still be acceptable under the new rule. Shortly after, Tumblr's own filtering technology flagged those very images as unacceptable. The last reason: new legal burdens are likely to stifle competition. Facebook and Google can afford to throw millions at moderation, automation and litigation, but smaller competitors or would-be competitors don't have that kind of budget. So in essence we would have opened the door to a few companies and then slammed that door shut for everyone else. The free and open internet has never been fully free or open, and the internet can amplify the worst of us as well as the best. But the internet still represents and embodies an extraordinary idea: that anyone with a computing device can connect with the world, tell their stories, organize, educate and learn. Section 230 helps make that idea a reality, and it's worth protecting. Thank you, and I look forward to your questions. Thank you. Ms. Peters, you are recognized for five minutes. Chairs and members of the subcommittees, it is an honor to be here today to discuss one of the premier security threats of our time, one that Congress is well positioned to solve. I am the executive director of the Alliance to Counter Crime Online. Our team is made up of academics, security experts, NGOs and citizen investigators who have come together to eradicate serious organized crime and terror activity online.
I want to thank you for your interest in our research and for asking me to join this panel of witnesses to testify. I also hope this committee will hear testimony from the US Trade Representative, because we think keeping CDA 230 language out of America's trade agreements is critical to our national security. I have a long history of tracking organized crime and terrorism. I was a war reporter, and I wrote a book about the Taliban and the drug trade that got me recruited by US military leaders to support our intelligence community. I have mapped terror networks for Special Operations Command, the DEA and CENTCOM. In 2014 I received government funding to map wildlife supply chains, and that's when my team discovered that the largest retail markets for endangered species are actually located on social media platforms like Facebook. Founding the Alliance to Counter Crime Online, which looks at crime more broadly than wildlife, has taught me the incredible range and scale of illicit activity happening online. It is far worse than I ever imagined, but we can and must get this under control. Under the original intent of CDA 230, there was supposed to be a shared responsibility between tech platforms, law enforcement and organizations like ACCO, but tech firms are failing to uphold their end of the bargain; through broad interpretations by the courts, they have safe harbor. Deflecting the blame, they try to convince you that most illegal activity is confined to the dark web. That's not the case. Surface web platforms provide the same anonymity and payment systems, and a much greater reach of people. We're talking about illicit groups ranging from Mexican drug cartels to Chinese triads that have weaponized social media platforms. I'm talking about US publicly listed social media platforms being used to move a wide range of illicit goods.
Now we're in the midst of a public health crisis, the opioid epidemic, which is claiming the lives of more than 60,000 Americans each year, but Facebook, the world's largest social media company, only began tracking drug postings last year, and within six months the firm identified 1.5 million posts selling drugs. That's just what they admitted to removing. To put that in perspective, that's 100 times more postings than the notorious dark web site the Silk Road ever carried. Study after study by ACCO members and others has shown widespread use of Google, Twitter, Facebook, Reddit and YouTube to market and sell fentanyl, oxycodone and other addictive substances to US consumers in direct violation of federal law. Every major internet platform has a drug problem. Why? Because there is no law that holds tech firms responsible, even when a child dies buying drugs on internet platforms. Tech firms play an active role in spreading harm. Their algorithms, originally designed with good intentions to connect friends, also help criminals and terror groups connect to a global audience. ISIS and other terror groups use social media to recruit, fundraise and spread their propaganda. The ACCO alliance includes an incredible team of Syrian archaeologists tracking the trafficking of thousands of artifacts plundered from sites and sold, in many cases, by ISIS supporters. It is a war crime. We are tracking groups on Instagram, Google and Facebook where endangered species are sold, items ranging from rhino horn and elephant ivory to chimpanzees and cheetahs. The size of these markets is threatening species with extinction. I could continue to sit here and horrify you all morning: illegal dogfighting, live videos of children being sexually abused, human remains, counterfeit goods. It's all just a few clicks away. The tech industry routinely claims that modifying CDA 230 is a threat to freedom of speech. But CDA 230 is a law about liability, not freedom of speech.
Please try to imagine whether any other industry in this country has ever enjoyed such an incredible subsidy from Congress: total immunity, no matter what harm their product brings to consumers. Firms could have implemented internal controls to prevent illicit activity from occurring, but it was cheaper and easier to scale while looking the other way. They were given this incredible freedom, and they have no one to blame but themselves for squandering it. We want you to reform the law so that firms can be held liable for hosting terror and crime content, to require that firms report crime and terror activity to law enforcement, and to provide appropriations to law enforcement to contend with this data. If it's illegal in real life, it ought to be illegal to host it online. It is imperative that we reform CDA 230 to make the internet a safer place for all. The gentle lady yields back. Ms. Oyama, you are recognized for five minutes. Chairs, ranking members and distinguished members of the committee, thank you for the opportunity to appear before you today. I appreciate your leadership on these issues and welcome the opportunity to discuss Google's work in these areas. My name is Katie Oyama and I am the head of IP policy at Google, where I advise the company on public policy frameworks for the management and moderation of online content of all kinds. At Google our mission is to organize the world's information and make it universally accessible and useful. Our services, and many others, are positive forces for creativity, learning and access to information, and this creativity and innovation continues to yield enormous economic benefits to the United States.
However, like all means of communication that came before it, the internet has been used for both the best and worst of purposes. This is why, in addition to respecting local law, we have robust policies, procedures and community guidelines that govern what activity is permissible on our platforms, and we update them regularly to meet the needs of our users and society. My testimony today will focus on three areas: the history of 230 and how it has helped the internet grow, how 230 contributes to our efforts to take down harmful content, and Google's policies across our products. Section 230 has created a robust internet ecosystem where commerce, innovation and free expression thrive, while also enabling providers to take aggressive steps to fight online abuse. Digital platforms connect millions of consumers with legitimate content across the internet, facilitating 29 trillion dollars in online commerce each year. Addressing illegal content is a shared responsibility, and our ability to take action on content is underpinned by 230. The law not only clarifies when services can be held liable for third-party content; it also provides the legal certainty necessary for services to take swift action against harmful content. Section 230's Good Samaritan provision was introduced to incentivize self-monitoring and facilitate content moderation. It does nothing to alter platform liability for violations of federal criminal law, which is expressly exempted from the scope of the CDA. Over the years the importance of section 230 has grown, and it is critical in ensuring economic growth. A study found that over the next decade 230 will contribute an additional 4.25 million jobs and 440 billion dollars in growth to the economy. Investors in the startup ecosystem have said that weakening online safe harbors would have a recessionary impact on investment, and internationally 230 is a differentiator for the US. China, Russia and others take a different approach, stifling innovation and censoring speech online, sometimes including speech that is critical of political leaders.
Perhaps the best way to understand the importance of 230 is to understand what might happen if it weren't in place. Without 230, search engines, political blogs and review sites of all kinds would either not be able to moderate content at all, or they would over-block; either way, they would be harming the consumers and businesses that rely on their services every day. Without 230, platforms could be sued for decisions around the removal of content such as spam, mature content or videos related to piracy and scams. And it is because of 230 that we can and do enforce the policies that ensure that our platforms are safe, useful and vibrant. For each product we have a specific set of rules and guidelines that are suitable for the type of platform and the risk of harm. These include clear content policies and community guidelines, flagging mechanisms to report content that violates them, and increasingly effective machine learning that can facilitate removal of content at scale before a single human user has ever been able to access it. In the three-month period from April to June 2019, we removed over 9 million videos from our platform for violating our guidelines; 87 percent of this content was flagged by machines first rather than by humans, and of those detected by machines, 81 percent of that content was never viewed by a single user. We now have over 10,000 people across Google working on content moderation and have invested hundreds of millions of dollars in these efforts, and in my written testimony I go into further detail about our policies and procedures for tackling content on Search, Google Ads and YouTube. We are committed to being responsible actors and part of the solution. Google will continue to invest in the people and technology to meet this challenge, and we look forward to continued collaboration with the committee as it examines these issues. Thank you for your time, and I look forward to taking your questions. Doctor Farid, you have five minutes.
Members of both subcommittees, thank you for the opportunity to speak today. Technology, as you've heard, and the internet have had a remarkable impact on our lives and society. Many educational, entertaining and inspiring things have emerged from the past two decades of innovation. At the same time, many horrific things have emerged: a massive proliferation of child sexual abuse material, the radicalization of international terrorists, the distribution of deadly drugs, disinformation campaigns designed to sow civil unrest and disrupt democratic elections, the proliferation of deadly conspiracy theories, the routine harassment of women and underrepresented groups in the form of threats of sexual violence and revenge and nonconsensual pornography, small- and large-scale fraud, and failures to protect our personal and sensitive data. How, in 20 short years, did we go from the promise of the internet to democratize access to knowledge and make the world more understanding and enlightened, to this litany of horrors? A combination of naivete, ideology, willful ignorance and growth at all costs has led the titans of tech to fail to install proper safeguards on their services. The problems they face today are not new. As early as 2003 it was well known that the internet was a haven for child predators. Despite early warnings, the technology sector dragged their feet through the mid-2000s and did not respond to the known problems of the time, nor did they put in place the proper safeguards to contend with what should have been the anticipated problems we face today. In defense of the technology sector, they are contending with an unprecedented amount of data: 500 hours of video uploaded to YouTube every minute, over 1 billion daily uploads to Facebook, some 500 million tweets per day. On the other hand, these same companies have had over a decade to get their houses in order and have failed to do so, and at the same time they have managed to profit handsomely by harnessing the scale and volume of the data that's uploaded to their services every day.
These services don't seem to have trouble dealing with unwanted material when it serves their interests. They remove copyright infringement, and they effectively remove legal adult pornography, because otherwise their services would be littered with pornography, driving away advertisers. During his 2018 congressional testimony, Mr. Zuckerberg invoked artificial intelligence as the savior for content moderation, and we were told to wait 5 to 10 years. Putting aside that it is not clear what we should do in the intervening decade or so, this claim is almost certainly overly optimistic. For example, earlier this year Facebook's chief technology officer showcased Facebook's latest AI technology for discriminating images of broccoli from images of marijuana. Despite all the latest advances in AI and pattern recognition, the system is only able to perform this task with an average accuracy of 91 percent. This means that approximately one in 10 times, the system is wrong. At a scale of 1 billion uploads a day, this technology cannot possibly automatically moderate content. And this discrimination task is much easier than the task of identifying the broad class of child exploitation, extremism and disinformation material. The promise of AI is just that, a promise, and we cannot wait a decade or more with the hope that AI will improve by nine orders of magnitude, when it might be able to contend with content moderation. To complicate things even more, earlier this year Mr. Zuckerberg announced that Facebook is implementing end-to-end encryption on its services, preventing anyone, including the government and Facebook, from seeing the contents of any communications. Implementing end-to-end encryption will make it more difficult to contend with the litany of abuses I enumerated at the opening of my remarks. We can and must do better when it comes to contending with the most violent, dangerous and hateful content online.
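The scale objection in this testimony can be checked with back-of-the-envelope arithmetic: even a classifier that is right 91 percent of the time produces an enormous absolute number of errors when applied to a billion items a day. This is a minimal sketch using only the figures quoted above, not independent measurements:

```python
# Back-of-the-envelope check of the moderation-at-scale argument,
# using the figures quoted in the testimony.
uploads_per_day = 1_000_000_000  # ~1 billion daily uploads
accuracy = 0.91                  # the broccoli-vs-marijuana demo's accuracy

errors_per_day = uploads_per_day * (1 - accuracy)
print(f"{errors_per_day:,.0f} misclassifications per day")
```

Roughly 90 million items per day would be wrongly flagged or wrongly passed, which is the arithmetic behind the claim that a 91-percent-accurate classifier cannot, on its own, carry content moderation at platform scale.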
I reject the naysayers who argue that it is too difficult from a policy or technological perspective, as well as those who say that reasonable and responsible content moderation will lead to the stifling of an open exchange of ideas. I look forward to taking your questions. Thank you, Doctor Farid. We have concluded our openings, so we're going to move to member questions. Each member will have five minutes to ask questions of our witnesses, and I will start by recognizing myself for five minutes. I have to say, as I said at the beginning of my remarks, this is a complex issue, a very complex issue, and I think we've all heard the problems. What we need to hear is solutions. Let me just start by asking all of you, just by a show of hands: who thinks that online platforms could do a better job of moderating the content on their websites? So that's unanimous. I agree, and I think it's important to note that we all recognize that content moderation online is lacking in a number of ways and that we all need to address this issue better. If you, who are the platforms and the experts in this technology, do not, and you put that on our shoulders, you may see a lot of legislation you don't like very much, and that would have a lot of unintended consequences for the internet. So I would say to all of you: you need to do a better job. You need to have the industry get together and discuss better ways to do this. The idea that you can buy drugs online and we can't stop it, to most Americans hearing that, they don't understand why that's possible, why it wouldn't be easy to identify people who are trying to sell illegal things online and take those sites down. Child abuse, it's very troubling. On the other hand, I don't think anybody on this panel is talking about eliminating section 230. So the question is, what is the solution, between not eliminating 230, because of the effects that would have on the whole internet, and making sure that we do a better job of policing this?
Mr. Huffman, a lot of people know Reddit, but it's a relatively small company when you place it against some of the giants, and you host many communities and rely on your volunteers to moderate discussions. I know that you shut down a number of controversial subreddits that had spread deepfakes, violent content and dangerous conspiracy theories. But what would Reddit look like if you were legally liable for the content your users post, or for your company's decisions to moderate user content in communities? What Reddit would look like is we would be forced to go to one of two extremes. In one version, we would stop looking. We would go back to the pre-230 era, which means if we don't know about it, we are not liable. I'm sure that is not what you intend, and it's not what we want. It would not be aligned with our mission of bringing community and belonging to everybody in the world. The other extreme would be to remove or prohibit any content that could be remotely problematic, and since Reddit is a platform where 100 percent of our content is created by our users, that would fundamentally undermine the way Reddit works. So it's hard to give you an honest answer of what Reddit would look like, as I'm not sure Reddit as we know it could exist in a world where we had to remove all user generated content. Doctor McSherry, you talked about the risk to free speech if 230 were to be repealed or altered, but what other tools could Congress use to incentivize online platforms to moderate dangerous content and encourage a healthier online ecosystem? What would your recommendation be, short of eliminating 230? So, a number of the problems that we have talked about today, which I think everyone agrees are very serious, and I want to underscore that, are actually often addressed by existing laws that target the conduct itself. For example, in the Armslist case, you had a situation where what Armslist was doing, the selling of the gun that was so controversial, was actually perfectly legal under Wisconsin law.
Similarly, many of the problems that we have talked about today are already addressed by federal criminal law. Those laws already exist, and section 230 is not a barrier, because of course there's a carve-out for federal criminal law. So I would urge this committee to look carefully at the laws that target the actual behavior that we are concerned about and perhaps start there. Ms. Peters, you did a good job of horrifying us with your testimony. What solution do you offer, short of repealing 230? I don't propose repealing 230, and I think we want to continue to encourage innovation in this country; it's a core driver of our economy. But I do believe that CDA 230 should be revised so that if something is illegal in real life, it is illegal to host it online. I don't think that is an unfair burden for tech firms, certainly some of the wealthiest firms in our country; they should be able to take that on. I have a small business. We have to run checks to make sure, when we do business with foreigners, that we are not doing business with somebody who is on a terror blacklist. Is it so difficult for companies like Google and Reddit to make sure they're not hosting an illegal pharmacy? I see my time is expiring, but I thank you, and I think we just heard your answer. The chairman now yields to my ranking member for five minutes. Again, thanks to our witnesses. I don't know if you saw a recent New York Times article that outlined the horrendous nature of child abuse online and how it has grown exponentially over the last decade. My understanding is that tech companies are only legally required to report images of child abuse when they discover them, and that does not require them to actively look for it. I understand you make voluntary efforts to look for this content, but how can we encourage platforms to better enforce their terms of service, or proactively use the sword provided by subsection (c)(2) of 230, its good-faith provision, to create accountability within platforms?
Thank you for the question, and particularly for focusing on the importance of subsection (c)(2) to incentivize platforms to moderate content. I can say that at Google we think transparency is important, so we publish our guidelines. We publish our policies. We publish on YouTube a quarterly transparency report where we show, across the different categories of content, the volume of content we have been removing, and we also allow users whose content is stricken, if they think that was a mistake, to appeal and to track what is happening with that appeal. We understand that this piece of transparency is critical to user trust and to discussions with policymakers on these important topics. Ms. Citron, a number of defendants have claimed section 230 immunity in the courts, some of which are tech platforms that may not host user content at all. Was section 230 intended to capture those platforms? Platforms that are solely responsible for the content, where there's no user generated content and they are creating the content, the question is whether they would be covered by the legal shield of 230. You're asking, is that the case? No, they would be responsible for content they created and developed, so section 230, that legal shield, would not apply. Mr. Farid, are there tools available, like PhotoDNA or Content ID, to flag the sale of illegal drugs online? If the idea is that platforms should be incentivized to scan their platforms and take down blatantly illegal content, shouldn't the words or other indicators associated with opioids be searchable through an automated process? The short answer is yes. There are two ways of doing content moderation. Once material has been identified, typically by a human moderator, whether that's child abuse, illegal drugs, terrorism-related material or copyright infringement, it can be fingerprinted and then stopped from future upload and distribution. That technology has been well understood and deployed for over a decade.
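The fingerprint-and-block workflow described here can be sketched as a lookup against a shared database of hashes of previously identified material. Production systems such as PhotoDNA use perceptual hashes that survive resizing and re-encoding; the exact-match SHA-256 below, and all function names, are simplifying assumptions for illustration only.

```python
import hashlib

# Hypothetical shared database of fingerprints of already-identified
# material (real systems use perceptual hashes, not plain SHA-256).
known_bad_fingerprints: set[str] = set()

def fingerprint(content: bytes) -> str:
    """Exact-match stand-in for a perceptual hash of uploaded content."""
    return hashlib.sha256(content).hexdigest()

def flag_content(content: bytes) -> None:
    """Called once a human moderator identifies illegal material."""
    known_bad_fingerprints.add(fingerprint(content))

def allow_upload(content: bytes) -> bool:
    """Block any re-upload whose fingerprint matches flagged material."""
    return fingerprint(content) not in known_bad_fingerprints
```

Once one copy has been reviewed and flagged, every identical re-upload is stopped automatically; the harder problem is the very first copy, which no fingerprint database has seen yet.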
It has been deployed unevenly across platforms and not nearly aggressively enough, but that's one form of content moderation that works today. The second form is what I would call day zero: finding the Christchurch video on upload. That is difficult, and it still requires law enforcement, journalists or the platforms themselves to find it, but once that content has been identified, it can be removed from future upload. And I'll point out that today you can go onto Google and type "buy fentanyl online" and it will show you, on the first page, illegal pharmacies where you can click and purchase fentanyl. That is not a difficult find. We're not talking about things buried on page 20; it is on the first page, and in my opinion there's no excuse for that. A follow-up, because you said the enforcement some of these platforms are doing out there is anemic. Last year in this room we passed some 60 pieces of legislation dealing with the drug crisis we have in this country, fentanyl being one of them, and you mentioned you can type in fentanyl and find it. Again, what we're trying to do is make sure we don't have the 72,000 deaths that we had in this country over a year, with over 43,000 associated with fentanyl. How do we go to the platforms and say we've got to enforce this, because we don't want this stuff flowing in from China? How do we do this? This is what the conversation is about. We don't repeal 230, but we make it a responsibility, not a right. If your platform can be weaponized in the way we've seen across the board, from the litany of things I had in my remarks, surely something is not working. If I can find it on page 1, and not just me, but my colleagues at this table and investigative journalists, then we know this content is there, and we have to ask the question: if a reasonable person can find this content, surely Google can find it as well. And then what is the responsibility? You said earlier that you should enforce your terms of service. If we don't want to talk about 230, let's talk about terms of service.
The terms of service of most major platforms are good; it's just that they don't do much to enforce them in a transparent way. My time is expired, and I yield back. The chair now recognizes Ms. Schakowsky for five minutes. Thank you, Mr. Chairman. Ms. Oyama, you described in one of the comments you presented to us what would happen without 230. I want to see if there are any hands that would go up for abandoning 230. Has anybody said that? This is not the issue. This is a sensible conversation about how to make it better. Mr. Huffman, and I want to thank you, we had a productive meeting yesterday where you explained to me what your organization does and how it is unique, but you also said in your testimony that section 230 is a uniquely American law. And yet when we talked yesterday, you thought it was a good idea to put it into a trade agreement with Mexico and Canada. If it's a uniquely American law, let me just say that I think trying to fit it into the regulatory structure of other countries at this time is inappropriate. And I would like to quote from a letter that both Chairman Pallone and Ranking Member Walden wrote some time ago to Mr. Lighthizer, which said: we find it inappropriate for the United States to export language mirroring section 230 while such serious policy discussions are ongoing. And that's what's happening right now; we're having a serious policy discussion. What I want to do is figure out what we really want to amend or change in some way. The three of you have talked about the need for changes, so let me start with what you want to see in 230. I'd like to bring the statute back to its original purpose, which was to apply to Good Samaritans engaged in responsible and reasonable content moderation practices. We have language to change the statute that would condition the shield: that we're not going to treat a provider or user of an interactive computer service.
That is, one that engages in reasonable content moderation practices would not be treated as the publisher or speaker, so it would keep the immunity, but condition it. Let me suggest that if there's language, I think we'd like to see your suggestions. Ms. Peters, I think you pretty much scared us as to what is happening; how can we make 230 responsive to those concerns? We would love to share some proposed language with you about revising 230 to better protect against organized crime and terror activity on the platforms. One of the things I'm concerned about, that a lot of tech firms are involved in, is that when they detect illicit activity, or it gets flagged to them by users, their response is to delete it and forget about it. What I'm concerned about is two things. Number one, that essentially destroys critical evidence of a crime. It helps criminals cover their tracks, as opposed to a situation like what we have for the financial industry, and even aspects of the transport industry, where if they know that illicit activity is going on, they have to share it with law enforcement, and do it in a certain timeframe. I want to see the content removed, but I don't want to see it deleted, and I think that is an important distinction. I'd like to see a world where the big tech firms work collaboratively with civil society and with law enforcement to root out some of these... I'm going to cut you off because my time is running out, and I want to get to the doctor with the same question, so I welcome concrete suggestions. I agree with my colleague Professor Citron. I think 230 should be a privilege, not a right. But we should be worried about the small startups. If we start regulating now, the ecosystem could become even more monopolistic, so we have to think about how we create carve-outs so that small platforms can compete, the way these companies could when they did not have to deal with that regulatory pressure. And the last thing I will say is that the rules have to be clear, consistent and transparent. Thank you, I yield back. The chair recognizes Ms. McMorris Rodgers for five minutes.
Section 230 was intended to provide online platforms with a shield from liability as well as a sword for good-faith efforts to filter, block or otherwise address offensive content online. Professor Citron, do you believe companies are using the sword enough, and if not, why do you think that is? The dominant platforms, and I've been working with Facebook and Twitter for about eight years, so I would say the dominant platforms that are focused on this, are at this point engaging in what I would describe, at a broad level, as fairly reasonable content moderation practices. I think they could do far better on transparency about what they mean when they forbid certain speech. What do they mean by that? What's the harm they want to avoid? They could be more transparent about the processes they use when they make decisions, to have more accountability. But what really worries me are the sort of renegade sites as well: those whose business is incitement and who do no moderation. There are dating apps that have no ability to manage impersonations and track IP addresses, and sometimes it's the biggest of providers, not the small ones, who know they have illegality happening on their platforms and do nothing about it. Why are they doing that? Because of section 230 immunity. The dating app Grindr comes to mind, hosting impersonations of someone's ex, where the impersonator was using Grindr to send thousands of men to his home. Grindr heard 50 times from the individual being targeted and did nothing about it; when they responded, after getting a lawsuit, their response was that their technology doesn't allow them to track IP addresses. But Grindr is fairly dominant in the space. When the person went to Scruff, a smaller dating site where the impersonator was also posing as him and sending men to his home, Scruff could track down the IP address and take care of it. So the notion of smaller versus large, by my lights: there are good, responsible practices, and there are irresponsible, harmful practices. Thank you for that.
Mr. Hoffman and Ms. Oyama, your policies prohibit illegal content or activities on your platforms. Regarding your terms of service, how do you monitor content on your platforms to ensure that it does not violate your policies? Maybe I'll start with Mr. Hoffman. In my opening statement I described the three layers of moderation that we have. First is our company moderation, our team; this is the group that both writes the policies and enforces the policies. Primarily the way they work is enforcing these policies at scale, looking for aberrational behavior rather than any one problematic site or word. We participate in cross-industry sharing, which allows us to find images — for example, of child exploitation — that are shared industrywide, or fingerprints thereof. Next are our community moderators; these are users. And following that, the users themselves. Those groups participate together in removing content inappropriate for their communities and in violation of our policies. Our policies are not very long, but one of the points is no illegal content, so no regulated goods: no drugs, no guns, nothing of that sort. If we find it, we get it off the platform. 230 doesn't provide us liability protection for criminal activity, so we are not in the business of committing crimes or helping people commit crimes. That would be problematic for our business. We do our best to make sure it's not on the platform. Ms. Oyama, would you address that, and just what you are doing if you find illegal content? Across YouTube we have clear content policies. We publish those online, and we have videos that give more examples and specific ways to understand them.
We're able to detect — of the 9 million videos that we removed from YouTube in the last quarter, 87 percent of those were detected first by machines. So automation is one very important way, and the second way is human reviewers. We have community flagging, where any user that sees problematic content can flag it and follow what happens with that complaint. We have human reviewers, and we are transparent in explaining that. When it comes to criminal activity on the internet, CDA 230 has a complete carveout for federal criminal law. In the case of Grindr — we have policies against harassment, but in the case of Grindr, where there was criminal activity, my understanding is there is a defendant in that case, and there is a criminal case for harassment and stalking proceeding against him. In certain cases — opioids, again, are controlled substances under criminal law, and there is a provision addressing sale of controlled substances on the internet. That is a federal provision, and in cases like that, where there is a law enforcement role and correct legal process, we would work with law enforcement to provide information under due process or subpoena. The gentleman is recognized for five minutes. I really want to thank this panel. I'm a former constitutional lawyer, so I'm always interested in the intersection between criminality and free speech. In particular, Professor Citron, I was reading your written testimony, which you discussed with Ms. Schakowsky, about how section 230 should be revised to continue to provide protections but also return the statute to its original purpose, which is to have companies act more responsibly, not less. I want to talk during my line of questioning about online harassment, because this is a real issue that has only increased. The Anti-Defamation League reported that 24 percent of women and 63 percent of LGBT individuals have experienced online harassment because of their gender or sexual orientation. This is compared to only 14 percent of men.
And 37 percent of all Americans of any background have experienced severe online harassment, which includes sexual harassment, stalking, physical threats or sustained harassment. So I want to ask you, Professor, to talk to me, very briefly, about how section 230 facilitates illegal activity. Do you think it undermines the value of those laws, and if so, how? Let me say that in cases involving harassment, of course, there is a perpetrator and the platform that enables it, and most of the time the perpetrators are not pursued by law enforcement. I have explored the fact that law enforcement doesn't understand the abuse and doesn't know how to investigate it. In the case of Grindr, there were ten protective orders that were violated, and New York has done nothing about it. So it's not true that we can always find a perpetrator, especially in cases of stalking, harassment and threats. We see a severe underenforcement of law, particularly when it comes to gendered harms. But couldn't the site be ordered to block the person from communicating with the other? So even with protective orders, section 230 platforms ignore requests to take down that material? They have. You're nodding your head, Professor. They do, and they can comply, especially if those protective orders are coming from the criminal law. I wanted to ask you, Doctor McSherry: sexual harassment continues to be a significant problem on Twitter and on other social media platforms as well. I know section 230 is a critical tool that facilitates content moderation, but as we have heard in the testimony, a lot of the platforms are not being aggressive enough in enforcing their terms and conditions. So what I want to ask you is: what can we do to encourage platforms to be more aggressive in protecting consumers and addressing issues like harassment? I imagine this hearing will encourage many of them. And more hearings. I understand that. So I actually think that many of the platforms are pretty aggressive already in their policies.
I agree with what many have said here today, which is that it would be nice if they would start by clearly enforcing their actual terms of service. We share a concern about this: often they are enforced very inconsistently, and that can be very challenging for users. The concern that I have, if we institute what I think is one proposal — that whenever you get a notice you have some duty to investigate — is that it could actually backfire for marginalized communities, because one of the things that also happens is, if you want to silence someone online, one thing you might do is flood the service provider with complaints about them. And then they end up being the ones who are silenced rather than the other way around. Doctor Farid, what is your view? There are two issues here. With content moderation you risk over-moderating or under-moderating, and I would argue we are way under-moderating. We look at where we fall down, make mistakes and take down things that we shouldn't have, and I weigh that against 45 million pieces of child sexual abuse material reported just last year, and terrorism and drugs. The weights are imbalanced. You sort of have to note the imbalance: we are going to make mistakes, and we are making way more mistakes on the allowing of content. Thank you, Mr. Chairman, and thank you for holding this very important hearing. I've been in information technology for most of my adult life, and social responsibility has been an issue that I have talked about a lot. The absence of heavy-handed government regulation, I think, is what has allowed the internet and the social media platforms to grow like they have. I hate to sound cliche, but there's that old line from the Jurassic Park movie: sometimes we're more focused on what we can do, and we don't think about what we should do. I think that's where we find ourselves, some of us anyway.
As some of our witnesses have noted, the accessibility of a global audience through internet platforms is being used for illegal and illicit purposes by terrorist organizations, and even for the sale of opioids, which continues to severely impact communities across the nation, particularly in rural areas like where I live in southeastern Ohio. However, these platforms also provide an essential tool for legitimate communications and the free, safe and open exchange of ideas, which has become a vital component of modern society and today's global economy. I appreciate hearing from all of our witnesses as our subcommittees examine how section 230 of the Communications Act is empowering internet platforms to effectively self-regulate under this light-touch framework. So, Mr. Hoffman, in your testimony you discussed the ability of not only employees but also users to self-regulate and remove content that goes against Reddit's rules and community standards. Do you think other social media platforms, for example Facebook or YouTube, have been able to successfully implement similar self-regulating functions? If not, what makes Reddit unique in its ability to self-regulate? Thank you, Congressman. I am only familiar with the other platforms to the extent that you are, which is to say I'm not an expert. I do know they're not sitting on their hands; they are making progress. Reddit's model is unique in the industry in that we believe the only thing that scales with users is users. So when we're talking about user-generated content, we share some of this burden with those people. In the same way that in our society here in the United States there are many unwritten rules about what is acceptable to say or not, the same thing exists on our platforms, and by allowing and empowering our users and communities to enforce those unwritten rules, it creates an overall healthier ecosystem.
In your testimony, you discussed the responsibility of determining which content is allowed on platforms, including balancing respect for diverse viewpoints and giving a platform to marginalized voices. How does a system like Reddit's upvotes and downvotes impact the visibility of diverse viewpoints, and how do likes and dislikes on YouTube impact a video's visibility? Thank you for the question. As you have seen, users can give a thumbs-up or thumbs-down. It's one of many signals; it certainly wouldn't be determinative of the recommendation of a video on YouTube. We really appreciate your point about responsible content moderation. I did want to make the point that, on the piece about harassment and bullying, we removed 35,000 videos from YouTube just in the last quarter. We can do this because of CDA 230. Whenever somebody's content is removed, they may be upset, and there can be suits against the service provider for defamation or for breach of contract. Service providers large and small are able to have these policies and implement procedures to identify bad content and take it down because of the provisions of CDA 230. Mr. Chairman, I have some more questions that I'm going to submit in writing, as I want to stay within my time — and you would require me to stay within my time. The absence of regulations, as I mentioned in my opening remarks, takes social responsibility to a much higher bar. I would suggest to the entire industry of internet and social media platforms: you had better get serious about this self-regulating, or you are going to force Congress to do something they might not want to have done. With that I yield back. The gentleman yields back. Thank you very much, Mr. Chairman, and thank you to the witnesses for being here today. Last week the Senate Intel Committee released a bipartisan report on Russia's use of social media. It shows that they used social media platforms to influence the outcome of the 2016 election.
What role does section 230 play, or what role can it play, in ensuring the platforms won't be used this way again? Section 230 is critically important in allowing services like ours to protect citizens and users against foreign interference in elections. It's a critical issue, especially with elections coming up. On Google, across our systems in the 2016 election, partially due to the measures we have been able to take, such as removals, there were only two accounts associated with that interference on our systems, and less than $5,000 was all they spent. We continue to be extremely vigilant: we publish a political ads transparency report and require that it is disclosed who bought an ad, which is shown in an ads library. Do you feel that has been effective? We can always do more, but on this issue we are extremely focused, and we are working with campaigns. Mr. Hoffman? In 2016, we saw the same fake news and misinformation submitted to our platform as was on the others. The difference is, on Reddit it was largely rejected by the community, by the users, long before it even came to our attention. One thing our community is good at is being skeptical, rejecting all sorts of things and questioning things, for better or for worse. Between then and now, we have become dramatically better at detecting groups of accounts that are working in a coordinated manner, and we collaborate with law enforcement. So with everything we have learned in the past and can see going forward, I think we're in a pretty good position coming into the 2020 election. Doctor Farid, in your written testimony you discuss disinformation campaigns designed to disrupt elections. You mentioned there's more the platforms could be doing about moderating content online. What more should they be doing about this issue now? One example: we saw the fake video of Speaker Pelosi make the rounds, and the responses were interesting. Facebook said: we know it's fake, but we're not in the business of adjudicating the truth. That is not a technological problem; that was a policy problem.
That video was not satire or comedy; it was meant to discredit the Speaker. So I think, fundamentally, we have to relook at the rules. In fact, if you look at Facebook's rules, it says you cannot post things that are misleading or fraudulent. This was a clear case where the technology worked and the policies unambiguously, simply failed — a policy decision. To YouTube's credit, they actually took it down, and to Twitter's discredit, they didn't even respond to the issue. So in some cases it is a technological issue, but more often than not we are simply not enforcing the rules that are already in place. So that's a decision they made. Okay. What do you think of what he just said? There are two aspects of this. First, specifically on deepfakes, we have a policy against impersonation. A video like that can be used both to mislead people, or serve as misinformation, and it also raises questions about the rest of the things that we see and hear in important discussions, so the context around whether a video like that stays up or comes down on Reddit is really important. Those are difficult decisions. I will observe that we are entering a new era where we can manipulate videos the way we have historically been able to manipulate photos with Photoshop. So I do think that not only do the platforms have a responsibility; we as a society have to understand that the provenance of materials — for example, which publication something comes from — is critically important, because there will come a time, no matter what any of my tech peers say, when we will not be able to detect the fakes. Specifically on content like that on YouTube, we do have a policy against deceptive practices, and we would remove things. But there is ongoing work that needs to be done to be able to better identify deepfakes, which, in a political context, can severely undermine democracy. We opened up a data set and are working with researchers to develop technology to better detect when media is manipulated. There's a lot more to say, but you know how this is. I will yield back.
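The provenance point made in the testimony above — that when detection of fakes eventually fails, knowing where media came from becomes the backstop — can be illustrated with a minimal signing sketch. This is purely illustrative: real provenance efforts (such as the C2PA standard) use certificate-based public-key signatures embedded in the media, not the shared-secret HMAC used here for brevity.

```python
# Minimal provenance sketch: a publisher signs media bytes at publication
# time, and anyone holding the key can verify the bytes were not altered
# afterward. The idea is verify-the-source rather than detect-the-fake.
import hmac
import hashlib

PUBLISHER_KEY = b"demo-key-not-secret"  # placeholder secret for this sketch

def sign_media(media: bytes) -> str:
    """Return a hex signature binding the publisher to these exact bytes."""
    return hmac.new(PUBLISHER_KEY, media, hashlib.sha256).hexdigest()

def verify_media(media: bytes, signature: str) -> bool:
    """Check that the bytes are unmodified since signing."""
    return hmac.compare_digest(sign_media(media), signature)

original = b"\x00\x01video-bytes\x02"
sig = sign_media(original)
print(verify_media(original, sig))            # True: untouched
print(verify_media(original + b"edit", sig))  # False: any edit breaks it
```

The design choice this illustrates is the one Farid describes: a signature says nothing about whether content is true, only that it is exactly what a known publisher released, which remains checkable even when manipulated media becomes undetectable by inspection.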
The chair recognizes Mr. Kinzinger. Thank you all for being here today; we very much appreciate it. It's interesting, on that last line of questions: one of the best things about democracy is our ability to have free speech and share opinions, but this can also be something that presents a real threat. I thank the chairman for yielding. It would be safe to say that not every member of Congress has a plan for what to do about section 230 of the Communications Decency Act, but I think we all agree that this hearing is warranted and we need to have a discussion about the origins and intent of the section, and whether the companies that enjoy these liability protections are operating in the manner intended. Let me state that I generally appreciate the efforts certain platforms have made over the years to remove and block unlawful content. Let me also state that it is clearly not enough, and the status quo is unacceptable. It's been frustrating for me in recent years that my image and variations of my name are used by criminals to defraud people on social media, and this goes back ten years. Literally, I think it approaches the 50s to hundreds, given just the ones we know about. These scams are increasingly pervasive, and I not only brought it up in the hearing with Mark Zuckerberg last year, I also wrote him again this summer to continue to press him to act more quickly to protect his users. Sources indicate that in 2018, people reported hundreds of millions of dollars lost to online scammers, including $143 million from romance scams. Given what so many people have lost, it has become more and more important for platforms to verify user authenticity. So, to both Mr. Hoffman and Ms. Oyama: what do your platforms do to verify the authenticity of user accounts? Thank you for the question. I'm going to parse my answer in two. The first is on the scam itself. My understanding is you're probably referring to scams that target veterans in particular. We have a number of veterans communities on Reddit.
They offer support and shared experiences, and like all of our communities, they create their own rules. These communities have adopted rules that prohibit fundraising in general, because the community and its members know that they can be targeted by this sort of scam in particular. That's the sort of nuance that we think is really important, and it highlights the power of our community model. As a non-veteran, I might not have had that same sort of intuition. Now, in terms of what we know about our users: Reddit is not different from our peers in that we don't require people to share their real-world identity with us. We do know where they register from, what their IP address is, and maybe their email address, but we don't force them to reveal their full name or their gender. And this is important, because on Reddit people discuss sensitive topics — in these very same veterans communities, for example, or drug addiction communities, or communities for parents who are struggling with being a parent. These are not things somebody would go on Facebook, for example, and say: hey, I don't like my kid. I don't mean to cut you off, but we need to move on. I'm very sorry that that happened to you, Congressman. On YouTube we have a policy against impersonation. If there were ever a channel impersonating you, and a user saw that, there's a form they can go in and submit, with an example of their government ID, and that will result in the channel being struck. Across Google, Search is an index of the web, trying to get relevant information to users; every single day on Search we suppress 19 billion links that are spam and could be scams. And to defend users, we have a risk engine that can actually kick out fraudulent accounts before they enter. Thank you. I'm not upset about the sites that say I'm the worst congressman ever — that's understandable, I guess, for some people.
But we do have, again in my case, as an example — multiple cases — a woman from India who lost her entire life savings because she thought we were dating for a year, not to mention all of the money that she gave to this perpetrator, and all of these other stories. I think one of the biggest and most important things is that people need to be aware that if you have been talking to somebody over a period of a year of dating and have never authenticated them, it's probably not real. Ms. Peters, what is the risk of people not being able to trust others' identities online? There are multiple risks to that. But I come back to a key issue for us: if it is illicit, the sites should be required to hand over data to law enforcement and to proactively work with them. We heard a lot today from the gentleman from Reddit about their efforts to better moderate, but some of our members were able to go online and type in a search to buy fentanyl online, and it came up with many results: "buy fentanyl online," "buy it cheap without a prescription." Those are fairly simple search terms. We're not talking about a super high bar to get rid of that on your platform. Is it too hard to have that automatically redirect to a site that would advise you to get counseling for drug abuse? I'm not trying to be the thought police; we're trying to protect people from organized crime and terror activity. I yield back, but I have a bunch more questions I will submit. Thank you. For the record, I don't think he's the worst member of Congress. [laughter] I don't even think you are at the very bottom. You are not a bad guy. The chair recognizes Ms. Castor for five minutes. Thank you, Chairman Doyle, for organizing this hearing, and thank you to all of our witnesses for being here today. I would like to talk about the issue of 230 in the context of a horrendous tragedy in Wisconsin a few years ago, where a man walked into a salon where his wife was working, shot her dead in front of their daughter, and killed two others in that salon.
And then he killed himself. This is the type of horrific tragedy that is all too common in America today. You mentioned it, and I think you misspoke a little bit, because you said that was all legal, but it wasn't: two days before the shooting, there was a temporary restraining order issued against that man. He went online shopping on Armslist.com, and the next day he committed those murders. What happened is, Armslist knows that they have domestic abusers shopping, they have felons, they've got terrorists — all shopping for firearms — yet they are allowed to proceed with this. Earlier this year, the Wisconsin Supreme Court ruled that Armslist is immune, even though they know that they are perpetuating illegal conduct. They said that Armslist is immune because of section 230. The court basically said it did not matter that Armslist actually knew, or even intended, that its website would facilitate illegal firearms sales to dangerous people; section 230 still granted immunity. And Ms. Peters, you highlighted — we're talking about child sexual abuse content, illegal drug sales — that this has gone way too far. I appreciate that you all have proposed some solutions. Professor Citron, you highlighted a safe harbor: companies using best efforts to moderate content would have some protection. But how would this work in reality? Would this then be left up to the courts and those liability determinations? Can you speak to the need for very clear standards coming out of Congress? Yes, I will — thank you so much for the question. How would we do this? It would be in the courts. On an initial motion to dismiss, the company being sued would face the question: are you engaging in reasonable content moderation practices writ large — not with regard to any one piece of content or activity. And it is true that it then would be a mechanism in federal court for companies to explain what constitutes reasonableness.
I think we can come up right now with some basic thresholds for what we think are reasonable content moderation practices, or technological due process: accountability, having a process, clarity about what it is you prohibit. But it is going to have to be case by case, context by context, because what is reasonable in response to a deepfake is going to be different from the kind of advice I would give Facebook, Twitter and others about what constitutes a threat and how one figures that out. Thinking about the testimony about what we do: there are certain things where it would be in the public interest, I believe, for the statute to be explicit, so that it wouldn't wind up as an issue of fact in a lawsuit. What do you think, Doctor Farid? If it is illegal content online, whether it comes down shouldn't be a debatable question, right? I'm not a lawyer — I'm a mathematician by training — but I completely agree with you. For example, over the years we saw this when we were deploying PhotoDNA: technology companies wanted to get bogged down in the gray areas of the conversation when we were trying to remove child abuse material — what about an 18-year-old, what if it's not sexually explicit. My point is, there are complicated questions, but there is also clear-cut bad behavior. There is also an issue with the number of moderators being hired to go through this content. The publication called The Verge had a horrendous story about Facebook moderators that caught my attention, because one of the places discussed is Tampa, Florida, in my district. I'm going to submit follow-up questions about moderators and some standards for that practice, and I encourage you to weigh in. I yield back. Thank you, Mr. Chairman. In my 23 years of being a member, I've never had a chance to address the same question to two different panels on the same day, so it's kind of an interesting convergence. Upstairs we're talking about underage use of vaping products.
So I was curious: in the opening statements here, someone — and I apologize — someone mentioned two cases, one that was dismissed because they really did nothing, and one that tried to be the good actor and got slammed. I don't know about slammed, but I see a couple of heads nodding. Ms. Citron, can you address that first? You're shaking your head the most. Enthusiastically, because those are the two cases that effectively gave rise to section 230. What animated the writing of the statute — we've got to do something about this — is that one case basically says if you do nothing, you're not going to be punished for it, but if you try and you moderate, that actually heightens your responsibility. No good deed goes unpunished. Yes, right. That's why I'm here today, in many respects. So let me tie this into what's going on upstairs: if someone uses a platform to encourage underage vaping with unknown nicotine content, and the site then decides to clean it up, because of the way the law is written right now, would this good deed — which most of us would agree is a good deed — go punished? No, no — that's why we have section 230. They are encouraged; under section 230 they can remove it, and they are Good Samaritans. Right, that is a benefit of it. Okay, so here is the fear. In this debate that we heard earlier in the opening comments from my colleagues, on the USMCA: would removing the protections of 230 mean falling back to a regime by which a good-deed person could get punished, is that correct? We need to keep the 230 language out of the trade agreements. It is currently an issue of great debate here in the United States, and it's unfair to put that in a trade agreement, which would make it impossible — or at least harder — to change. I am on record about wanting it passed as soon as possible, unencumbered.
I'm not a proponent of trying to delay this process; I'm just trying to work through this debate. And also, upstairs, those of us who believe in legal products that have been approved by the FDA are concerned about black-market operations that would use platforms illicitly to sell to underage kids. That would be how I would tie these two hearings together, and again, I think it's pretty interesting. At the Facebook hearing a couple of years ago, I referred to a book called The Future Computed, which talked about the ability of industry to set standards. I do think that industry — we do this across the board, whether it is engineering or heating and air cooling equipment — industries just come together as good actors and say: here are our standards. The fear is, if this sector doesn't do that, the heavy hand of government will do it, which I think would really cause more problems. Doctor Farid, you're shaking your head. I have been saying to the industry: you have to do better. If you don't, somebody is going to do it for you, so do it on your own terms. I agree; we're not the experts. Part of that book talks about fairness, privacy, transparency, liability, accountability. I would encourage those of you who are listening to move in that direction on your own before we have to do it for you. I see my time has expired; I yield back. The gentleman yields back. The chair recognizes the gentleman for five minutes. This is very interesting testimony, and jarring in some ways. Ms. Peters, your testimony was particularly jarring. Have you seen authentic offers of weapons of mass destruction online? I have not. We certainly have members of our alliance that are tracking weapons activity. I think what was more concerning to me, in a way, is the number of illegal groups — designated terror groups like al Qaeda — that maintain webpages, and the Twitter and Facebook pages that link to those, and fundraising campaigns.
[inaudible conversation] There are many platforms that allow for closed groups, and it's inside those groups that we see the epicenter of illicit activity. It is hard for us to get eyes on those; we actually run undercover operations to get inside of them. Doctor Farid, you talked about the tension at tech companies between the motivation to maximize the amount of time users spend online on the platforms on one hand, and content moderation on the other. What do we do about that, briefly? We've been talking a lot about 230, and that's an important conversation, but there is another issue here. The underlying business model of Silicon Valley today is not to sell a product; you are the product. In some ways, that is where a lot of the tension is coming from, because the metrics we use are how many users, and how long do they stay on the platforms. You can see why that is fundamentally in tension with removing users and removing content. So the business model is also an issue, and the way we deal with the privacy of user data is also an issue here, because if the business model is monetizing your data, then I need to feed you information. There is a reason why we call it the rabbit hole effect on YouTube: if you start watching certain types of videos — of children, or conspiracies, or extremism — you are fed more and more of that content down the rabbit hole. So there's a real tension there, and it is about the bottom line; it's not just ideological. We're talking about the underlying problems. Thank you. I think with many of these issues that we are discussing today, whether it is harassment or extremism, it is important to remember the positive and productive potential of the internet. On YouTube, we have seen the It Gets Better project, we have seen counter-messaging, and we have a program called Creators for Change, who are able to create really compelling content for youth. I think it's also good to remember that 230 was born out of this committee.
It's longstanding policy and is relevant to foreign policy as well, and we would support its inclusion in trade agreements. Free markets are responsible for the surplus the United States has in digital services, and it's critically important for us to be able to moderate content and to prevent the censorship of other, more repressive regimes abroad. It's hard to restrain yourselves to brief answers, but clearly some of these companies could be doing more within the current legal framework to address problematic content. For each of you: what can you do with today's tools for tomorrow's content? For us, the biggest challenge is evolving our policies to meet new challenges. We have evolved all of our policies a dozen times and will continue to do so into the future. For example, two recent ones for us were expanding our harassment policy and banning involuntary pornography. Undoubtedly there will be new challenges in the future, and we need to be able to stay nimble; 230 actually gives us the space to adapt to these sorts of new challenges. Being nimble, ensuring that we do respond to changing threats — the landscape is going to change; we can't have a checklist right now. I would encourage companies to not only have policies but to be clear about them and to be accountable. Just quickly: the issue for me with a reasonableness standard — as a litigator, that is terrifying. It means, as a practical matter, especially for small businesses, a lot of litigation as courts try to figure out what counts as reasonable. One of the crucial things I think we need, if we want better moderation practices and we want users not to be treated just as products, is to incentivize alternative business models — to make sure that we clear space for competition, so that when a given site is behaving badly, such as Grindr, people have other places to go with other practices, and other sites are encouraged to develop and evolve. Market forces can work, and we need to let them work. The chair recognizes Ms.
Brooks, thank you for this very important hearing. First, actually, to set the record: the reason I am asking these questions is that I am a former U.S. Attorney, and I was very involved in internet crime. We did a lot of work from 2001 to 2007, and you are right, deepfake pornography was not a term at that time, or even two years ago. We certainly know that law enforcement has been challenged for decades now in dealing with pornography over the internet, and yet I believe that we have to continue to do more to protect children, and to protect kids all around the globe. A tool, PhotoDNA, was developed a long time ago to detect criminal online child pornography, meaning to detect that illegal activity, and yet the platforms have done little to build on it, so we have been dealing with this now for decades. This is not new, and we now have new tools. So is it a matter of tools or of effort? How is it that this is still happening? I have to say that this is a source of incredible frustration. PhotoDNA was something that I helped develop. For an industry that prides itself on rapid and aggressive development, there have been no tools in the last decade that go beyond PhotoDNA. That is pathetic. That is truly pathetic when we are talking about this kind of material. How does an industry that prides itself on innovation say we are going to use ten-year-old technology to combat some of the most gut-wrenching, heartbreaking content online? It is completely inexcusable. This is not a technological limitation; we are simply not putting in the effort to develop and deploy tools. I have to share that, having watched some of these videos, it is something you never want to see, and you can never get it out of your mind. I agree. I am curious, and I wanted Google to respond: how is it that we are still at this place? Thank you for the question. I will say that with Google, that is not true at all; we never stopped working on and prioritizing this.
We can always do better, but we are constantly adopting new technologies. We initiated one of the first, which is called CSAI Match. It enables us to create digital fingerprints of this imagery, prevent it from ever being uploaded to YouTube, and share it with others in the industry and with NGOs, and it has resulted in a sevenfold increase in the speed at which this content can be identified. It is going to continue to be a priority, and I want to be clear that, from the very top of our company, we believe we need to be a safe, secure place for parents and children, and we will not stop working on this issue. I am very pleased to hear that there have been advances and that you are sharing them; that is critically important. However, I will say that an Indiana State Police captain who actually testified before Energy and Commerce recently told me that one of the issues law enforcement runs into when working with internet companies is an attitude he calls minimally compliant. He said that internet companies will frequently not preserve content that can be used for investigation long enough after law enforcement makes the companies aware of the concerning material, and that they will automatically flag content to law enforcement for review without actually checking to see whether it is truly objectionable or not. Do any of you have thoughts specifically on his comments? He has been an expert. Dr. Farid, do you have thoughts on how we balance this critical law enforcement need, because they are saving children all around the globe? Ms. Peters, without restricting companies' immunity for hosting concerning content? I just feel that if companies started getting fined, some sort of punitive damages, every time there is illicit content, we would see a lot less illicit content very quickly. If it is illegal in real life, it should be illegal when posted online. It is a very simple approach that I think we could apply industry-wide.
I have a question in particular, because I asked Mark Zuckerberg this relative to terrorism and ISIS recruitment, and now we need to be even more concerned about ISIS. I understand that you have teams of people who take content down. How many people are on your team? The people dedicated to removing content at scale are about 20 percent of our company, about a hundred people. We have more than 10,000 people working on content moderation. But how many people are on the team that actually removes content, that actually does that work? I am happy to get back to you. With that I yield back. I would like to introduce a letter for the record; without objection. The chair recognizes the gentlewoman from New York, Ms. Clarke, for five minutes. I thank our chairmen, our chairwoman, and our ranking members for convening this joint subcommittee hearing today on fostering a healthier internet to protect consumers. I introduced the first House bill on deepfake technology, called the DEEP FAKES Accountability Act, which would regulate fake videos. Deepfakes can be used to impersonate political candidates, to create fake revenge porn, and to threaten the very notion of what is real, and your platforms are exactly where deepfakes are shared. What are the implications of section 230 for your deepfake policies? Thank you for the question. We released, as did most of our peers around the same time, a prohibition on deepfake pornography. We saw it as a new, emerging threat that we wanted to get ahead of as quickly as possible. The challenge we face, of course, is the one you raised: the increasing difficulty of detecting what is real and what is not. This is where we believe Reddit's model actually shines. By empowering our users and communities to adjudicate every piece of content, they often highlight things that are suspicious, not just videos and images but also text and sources.
And beyond that, I do believe very strongly that we as a society, not just the platforms, will have to develop defenses against this sort of manipulation. It is only going to increase. Thank you. On YouTube, our overall policy is a policy against deceptive practices, and there have been deepfakes that we identified as such and removed from the platform. For Search and for YouTube, surfacing authoritative, accurate information is core to our business and to our long-term incentives. I would agree with what Mr. Huffman said: one of the things that we are doing is investing deeply on the academic side, the research side, and the machine learning side, opening up data sets of known deepfakes so we can get better at identifying what is being manipulated. We also have a revenge porn policy for Search, for users who are victimized by that, and we extended it to include synthetic images, or deepfakes, in that area as well. Very well. Can you discuss the implications of section 230 on deepfake monitoring and removal? The activities that we have seen, YouTube's and Reddit's engagement, are precisely the kind of proactive activities we want in the face of clear illegality, moving quickly. But the real problem is not these folks at the table. There are now studies showing that even the ten biggest porn sites host deepfake videos, and there are sites whose basic business model is deepfake videos, and 230 provides them immunity. Does the current immunity reflect the unique nature of this threat? I do not think so. Section 230, as devised, at its best incentivizes the nimbleness that we are seeing from some dominant platforms. But as the plain language is written, 230(c)(1) does not condition the immunity on being responsible. So you have these outliers that cause enormous harm, because it can be that a search of your name turns up a deepfake video, and it is findable, and people then contact you, and it is terrifying for victims.
So these outlier companies have built their business model on that harm, and section 230 is what they point to. One of the many issues that has become an existential threat to civil society is the rise of hate speech and propaganda on social media platforms. If it were removed, would platforms be liable for hosting distasteful speech, and what would change around moderating such speech? I think this is a really important area to show the power and importance of section 230. There are First Amendment restrictions on government regulation of speech; there is an additional responsibility for service providers like us in the private sector to step up. We have a policy against this: incitement to violence is prohibited, and hate speech is prohibited, meaning speech targeting specific groups for attributes based on race, religion, veteran status, and age. The takedowns that we do every single quarter, through automated flagging, through machine learning, or through human reviewers, are only possible because of 230, because when we take down content, someone's content is being taken down, and they could otherwise litigate against the service provider, large or small. I think looking at the equities of small business interests in this case would be really important as well, because they are even more deeply reliant on this flexibility in the space to identify content and take it down without fear of litigation, legal risk, or uncertainty. I yield back, Mr. Chairman. The gentlewoman yields back, and now, Mr. Walberg, you are recognized for five minutes. Thank you. I appreciate the panel for being here. Today's hearing and the issues at hand hit home for a lot of us, as we have discussed here. The internet is such an amazing tool. It has spread great innovation, connecting millions of people in ways never even thought of before.
Truthfully, we look forward to what we will see in the future, but these are issues we have to wrestle with. Earlier this year I was pleased to invite Haley Petoskey from my district to the State of the Union as my guest, to highlight the good work she is doing in my district and surrounding areas to help combat cyberbullying. She is a very impressive young person who understands what is going on and the real impact it is having in high schools and colleges now. As a result of cyberbullying she attempted suicide; thankfully it was not successful, and she has tried to make something positive out of that experience. Mr. Huffman, Ms. Oyama, what are your companies doing to address cyberbullying on your platforms? Thank you. Just two weeks ago we updated our policies around harassment. It is one of the most complex and difficult challenges we face because it appears in many forms. One of the big changes we made is to allow reports of harassment not just from the victim but from third parties, so that issues of harassment can be brought to us and our teams can investigate. This is a nationwide issue, but particularly so on a platform where people come to us in times of need. For example, teenagers struggling with their own sexuality may have no place to turn, maybe not their friends or family, so they come to platforms like ours to talk to others in difficult situations. People having suicidal thoughts come to our platforms. Their safety is our first priority, regardless of the law, and we fully support lawmakers in this initiative to make sure that those people have safe experiences. We have made a number of changes and will continue to do so in the future. Thank you for the question. On YouTube, harassment and cyberbullying are prohibited, so we use our policies to help us enforce that.
Either through automated detection, human flagging, or community flagging, we are able to identify the content and take it down. Last quarter we removed 35,000 videos under the policy against harassment and bullying. I just want to echo Mr. Huffman's perspective that the internet and content sharing is a really valuable place, and it can serve as a lifeline to a victim of harassment or bullying; we see that all the time. Someone who may be isolated is able to reach out across borders, to another state, or to find another community, and that has really created a lot of hope. We also want to continue to invest in important educational practices. I am glad to hear that you both are willing to continue investing in helping us as we move forward in this area. Ms. Oyama, Google's ad network has come a long way in recent years in identifying potentially illegal activity. Given that Google is able to identify such activity, why not just take down the content? That was for Ms. Oyama. It is true that in our ads system we do have a risk engine. We prohibit illegal content; there are many different policies, and we strike more than 2 billion ads every year for violating those policies, illegal and beyond. Are you taking them down? Absolutely, before they are even able to run. That is squarely in line with our business interest: we want advertisers to feel that our network and our platforms are safe, and our advertisers only want to be serving good ads against good content. Final question: I understand that Google offers a feature to put a tag on a work that would automatically take it down when uploaded, but that Google charges a fee for this. Can this technology be applied to other illegal content, and why does Google not offer this tool for free? I think that may be a misperception.
We do have Content ID, which is our copyright management system. It is automated, and we have partners across the music and film industries, and every leading publisher is part of it. It is part of our Partner Program, so it is offered for free; it does not cost the partners anything. It is actually a revenue generator: we have paid out 3 billion dollars based on Content ID claims of copyrighted material, where the partners were able to take the majority of the ad revenue associated with the content. That system of being able to identify and detect content algorithmically, and then set controls, whether it should be monetized, as in the entertainment space, or absolutely blocked, as in the case of violent extremism, is something that powers much of YouTube. I yield back. The gentleman yields back. And Mr. Loebsack, you are now recognized for five minutes. Thank you, Madam Chair. I do want to thank Chairman Doyle and the two ranking members of the subcommittees, and I want to thank the witnesses for your patience as well; it has been very informative, even if I am not able to make it through all of the questions I would like to. This is one of the first times we have examined how the internet can be a source of innovation and human connection, which we all enjoy when we are making those connections, and which is obviously positive, but also a conduit for harm and criminality. I think everyone assembled here today is very much an expert in your field, and I appreciate hearing from you all today, as we consider how section 230 has been interpreted by the courts since its initial passage and whether any changes should be considered. I think there is a lot to consider in discussing the full scope of what 230 covers: cyberbullying and hate speech, whether on Facebook or YouTube or elsewhere.
Illicit transactions of harmful substances or weapons. I think the question today is twofold: first we must ask what content moderators are doing, and second we must ask whether congressional action is required to fix these challenges. This has been referred to throughout by some of you and some of us, but that is essentially the second question we are really facing today. After reviewing the testimony you submitted, we clearly have some differences of opinion on whether section 230 is where Congress should be focusing its resources, so I will begin by asking everyone the same question. It is probably both the easiest question and the most difficult, because it is seemingly vague: what does the difference between good and bad content moderation look like? We will start with you, Mr. Huffman. Thank you, Congressman, for that philosophically impossible question. I think there are a couple of easy answers that everybody on the panel will agree with. Bad content moderation is ignoring the problem; that was the situation we ran into prior to 230, with the sort of perverse incentives we were facing. There are many forms of good content moderation. What is important to us at Reddit is twofold. One is empowering our users and communities to set standards of discourse in their communities and amongst themselves; I think this is the only truly scalable solution. The second is what 230 provides us, which is the ability to look deeply at our platform, to investigate, and to use some finesse and nuance when we are addressing new challenges. What is the difference between good and bad content moderation? Of course, that precedes the question of why we are here: what kinds of harms are on the table today, and why we should even try to talk about changing section 230. What is bad, and incredibly troubling, is when sites are devoted to an entire business model of abuse and harm.
To my mind, that is the worst of the worst: sites that induce and solicit illegality and harm. That, to me, is the most troubling. I have some answers for you, which I submitted in my testimony. Thank you. Thank you for the question; I think it is a great question. As someone who supports civil liberties online, I think the primary hallmark of good content moderation is that it is precise, transparent, and careful. What we see far too often is that, in the name of content moderation and making the internet safer for everybody, all kinds of valuable and lawful content is taken offline. As mentioned in my written testimony, to point to one example: there is an archive of videos attempting to document war atrocities. Those videos are often flagged as violating terms of service because, of course, they contain horrible material, but the point is to support political conversations, and it is very difficult for service providers to tell the difference. If it is illegal in real life, it ought to be illegal online; content moderation ought to focus on illegal activity. I think there has been little investment in technology that would improve this for the platforms, precisely because of section 230 immunities. I do realize I am out of time. I would like to get responses from the final two witnesses in writing, if I could, please. Thank you so much, and I yield back. The gentleman yields back, and I recognize Mr. Carter for five minutes. Thanks to all of you for being here. I know that you all understand how important this is, and I hope, and I believe, that you all take it seriously. Thank you for being here and for participating in this. Ms. Peters, I would like to start with you. In your testimony you point out that there is clearly quite a bit of illegal conduct that the online platforms are still hosting, for instance illegal pharmacies where you can buy pills without a prescription.
Terrorists that are profiteering off antiquities, and also products from endangered species. How much effort do you feel the platforms are putting into containing this and stopping it? It depends on the platform. That is a good question, and I would like to respond with a question for you and the committee: when was the last time anybody saw genitalia on Facebook? If they can keep genitalia off of these platforms, then they can keep drug dealers off of these platforms, and child sex abuse off of these platforms. There is a policy to allow the video of Nancy Pelosi, and a policy to disallow pictures of human genitalia. I understand. Do you ever go to them, meet with them, and express this to them? Absolutely. How are you received? We are told the firms have intelligent people working on it, creating AI, and that in a few years the AI will work. When we have presented evidence of specific, identifiable crime networks on their networks, we have been told they will get back to us, and then they do not. That has happened multiple times. Have you ever been told they do not want to meet with you? No, we usually get meetings or calls. Do you feel like you have a good relationship, and do you feel like the effort is being put forward? I do not feel like the effort is being put forth. That is where I struggle. I am doing my best to keep the federal government out of this; I do not want to stifle innovation. But at the same time, we cannot allow this to go on. This is your responsibility, and if you do not take care of it, then you are going to force us to do it for you, and I do not want that to happen. It is as clear as that. Ms. Peters, you also mentioned in your testimony that you received funding from the State Department to map wildlife supply chains, and that is when you discovered there was a large retail market for endangered species on platforms like Facebook and WeChat. Have any of these platforms made a commitment to stop this, and if they have, is it working, is it getting any better? That is a good example.
A number of platforms have joined a coalition with the World Wildlife Fund and have taken a pledge to remove endangered species content and wildlife markets from their platforms by 2020. I am not aware that anything has changed, and we have researchers finding online wildlife markets all the time. I am going to be fair, and I am going to let Google, I am sorry, I cannot see that far, I am going to let you respond to that. Do you feel like you are doing everything you can? We can always do more, and we are always committed to doing more. I appreciate that, and I know that, but I do not need you to tell me that; I need you to tell me that you have a plan in place, in six months, and a process. Let me tell you what we are doing. For wildlife, the sale of endangered species is prohibited in Google ads, and we are part of the coalition that Ms. Peters mentioned. On the national epidemic you mentioned, opioids, we are hugely committed to helping and to playing a part in combating the epidemic. There is an online component and an offline component. On the online component, research shows less than 0.5 percent of opioid misuse originates on the internet, and what we have done, especially with Google Search, is work with the FDA: they can send us a warning letter that a result is a rogue pharmacy, and we will remove it from Search. And then there is an offline component too: we work with the DEA on prescription take-back days, and we feature take-back locations, such as CVS, on Google Maps. I invite you to do just that, and I would like to sit down and talk with you further. But I am going to give you the opportunity now, because my staff has already gone on Google and searched for illegal drugs, and it comes up. Are you going to tell me the same thing, that you are working on it and almost have it under control, while it is still coming up? I have a slightly different answer. First of all, it is against our rules to have controlled goods on our platform. It is also illegal, and 230 does not give us protection against criminal liability. We do see content like that on our platform.
If you went to any technology service with a search bar, including your own email, and typed in buy Adderall, I am sure you would find it in your spam folder. The content that has come up today gets banned: first it is removed by filters, and there is a lag between something being submitted and something being removed; naturally, that is how the system works. That said, we do take this issue seriously, and our technology continues to improve along these lines. That is exactly the sort of ability that 230 gives us: the ability to look for this content and remove it. To the extent that you or your staff have found this content specifically and it is still on our platform, we would be happy to follow up later, because it should not be. I feel like a parent pleading with their child: please do not make me have to do this. Thank you, Madam Chair, I yield back. The gentleman yields back. I recognize Congresswoman Kelly for five minutes. Thank you, Madam Chair, and thank you for holding this important hearing on section 230 and a more consumer-friendly internet. The ability of companies to moderate content under the Good Samaritan provision of this law seems to be widely misapplied. The Good Samaritan provision of section 230 was intended to allow platforms, in good faith, to restrict access to or the availability of material that the provider or user considers objectionable. Since its passage, many have argued the law is too ambiguous. In my work on this committee and as chair of the House Tech Accountability Caucus, I have worked with stakeholders to protect users in an accountable manner while allowing innovators to innovate. Today, as we look to foster a more consumer-friendly internet, it is my hope that our discussion will set the standards of responsible conduct in a balanced way. Professor, in your testimony you discussed giving platforms immunity from liability if they can show that their content moderation practices are reasonable. As the chairman referenced, how should companies know where the line is, or whether they are doing enough? Where is that line?
The genius of a reasonableness standard is that what it requires depends on context. There would be some baseline presumptions, by default, of what constitutes reasonable content moderation practices, and that includes having practices at all; a site that does not engage in moderation, and in fact encourages abuse, would fall outside the shield. Over the last ten years we have seen a baseline set of policies emerge that are best practices. Naturally, that will change depending on the challenge, so we will have to have different practices for new, evolving challenges, and that is why a reasonableness approach, which preserves the liability shield in exchange for those efforts, makes sense. Would you agree that any change we make has to ensure it does not further the ambiguity? What was disappointing to someone who helped congressional offices work on the language was that we included knowing-facilitation language despite the moderator's dilemma, which invites sites either to do nothing or to be overly aggressive. My biggest disappointment was, unfortunately, how it came out, because we now see, as in those initial cases, overly aggressive responses to sexual expression online, which is a shame, and we see others doing nothing. I hope we do not do that. Thank you. The way people communicate is rapidly changing, and information can start on one platform, jump to another, and go viral very quickly. The 2016 election showcased how false information can spread and how effectively it can be used to target different populations, or how offensive content incubated in groups can jump out to a wider audience. Ms. Peters, what do you believe is the responsibility of tech companies to monitor and proactively remove content that is rapidly spreading, before it is flagged by users? I believe companies need to moderate and remove content when it concerns clearly illegal activity. If it is illegal in real life, it ought to be illegal when hosted online: drug trafficking, human trafficking, wildlife trafficking, serious organized crime, and designated terror groups should not be given space to operate on platforms.
I also think 230 needs to be revised to provide more opportunities for civil plaintiffs and for state and local law enforcement to have the legal tools to respond to illicit activity; that is one of the reasons the law was passed. What steps are you taking, beyond machine learning, to stop the spread of extremism? When content is being spread widely, are other flags popping up when it is shared 10,000 or 100,000 times? Thank you for the question. On YouTube we are using algorithms: once content is identified and removed, our technology prevents it from being re-uploaded. But to your important point about working across platforms and industry cooperation, a good example is the Global Internet Forum to Counter Terrorism; we are one of the founding members, and many of the leading players are part of it. One of the things we saw during the Christchurch shooting is how quickly that content can spread, and we were grateful to see last week that the crisis protocols that have been put into place kicked in: there was a shooting in Germany and a piece of content appeared on Twitch, and the companies were able to engage the crisis protocol before it spread across the internet, which enabled all of us to block it. I know I am out of time. The gentlelady yields back. Thank you, Madam Chair, I appreciate it very much. My first question is for Dr. McSherry, a yes or no: I understand that in the past EFF has argued for including 230-like language in trade deals expressly for the purpose of locking in language to protect such a statute domestically. Do you see the intent of including 230-like language in trade agreements as ensuring that we may not revisit the statute? No. Okay, all right. Thank you very much. What I would like to do is ask that the EFF blog post from January 2018 by Jeremy Malcolm be entered into the record. Without objection, so ordered. Thank you, Madam Chair, I appreciate it. The next question is for Mr. Huffman and Ms. Oyama. In April 2018 I asked Mr.
Zuckerberg how soon illegal opioid ads would be removed from the website. His answer was that the ads would be reviewed when they were flagged by users as being illegal or inappropriate. This, of course, is the standard answer in the social media space. However, Mr. Zuckerberg also said at the time that the industry needs to, quote, build tools that proactively go out and identify ads for opioids before people even have to flag them for us to review, end quote. This would significantly cut down the time an illegal ad would be on the website. Again, it has been a year and a half; this is an epidemic and people are dying, and I am sure you agree. Has the industry been actively working on artificial intelligence flagging standards that can automatically identify illegal ads? What is the status of this technology, and when can we expect implementation, if they have been working on it? Whoever would like to go first. Thank you, Congressman. Reddit is a little bit different from our peers: all of our ads go through a strict human review process, making sure not only that they are on the right side of our content policy, which prohibits the buying and selling of controlled substances, but also of our much stricter ads policy, which has a much higher bar to cross, because we do not want ads that cause any controversy on our platform. We have to be proactive as far as that is concerned; Mr. Zuckerberg indicated that is the case. Again, people are dying, and we cannot stand by and let this happen, with people having access in most cases to opioids and drugs; there are different types of drugs. Would you like to comment? We certainly agree with your comment about the need for proactive effort. In Google ads we have something called a risk engine that identifies whether an ad is bad when it is coming in, so we can kick it out. Last year, in 2018, we kicked 3 billion ads out of our system for violating policy.
Any prescription pharmacy that shows up in an ad also has to be independently verified by an independent group like LegitScript. In the specific case of opioids, they are controlled substances under federal law, and there is a lot of important work we have done with the DEA and FDA, and even with pharmacies like CVS offline, to help promote prescription take-back days where people can bring opioids in and drop them off so they are not misused later on. One of the things we have seen is that the vast majority, more than 90 percent, of opioid misuse begins with a doctor's prescription or with a family member or friend, so using technology to educate and inform may be equally important to the other work we are doing. How about anyone else on the panel, would they like to comment: is the industry doing enough? I do not think they are doing enough. There is an enormous amount of drug sales taking place in Google Groups, on Instagram, and in Facebook groups; the groups on these platforms are at the epicenter. This is why industry has to be monitoring this. If you leave it up to users to flag it, and it is inside of a secret group, that will not happen. They know what users are getting up to; they are monitoring all of us all the time so they can sell ads. They can figure this out. Can I add: there are two issues, the ads but also the native content. You heard Ms. Peters say that she went and searched on Reddit, and that content is there even if it is not in an ad. There are two places you have to worry about, not just the ads. Thank you, I yield back. The gentleman yields back, and now I call on the chairman of the full committee for five minutes. Thank you, Madam Chair. I wanted to start with Ms. Oyama.
In your written testimony you discussed community guidelines for hate speech, and you are concerned about news reports that hate speech is on the rise. How does Section 230 incentivize platforms to moderate such speech, and does Section 230 incentivize platforms to take a hands-off approach to removing hate speech? Thank you for the question. On the category of hate speech, we have a very clear policy against incitement to violence or speech that is hateful toward groups with specific attributes, such as religion, sex, age, or disability. It can be detected by machines, which is the case 87 percent of the time, as well as by community partners and individual users, and we act on all of those. In the last quarter we saw an increase in the amount of content that our machines were able to find and remove, and those removals are vitally dependent on the protections in CDA 230 that give service providers the ability to moderate content and take it down even where it might otherwise give rise to legal claims. Section 230 is what enables not only Google or YouTube but any site with user comments and user-generated content, any site on the internet large or small, to be able to moderate that content. So I think we would encourage Congress not to harm the good actors and innocent actors in an effort to go after the truly bad criminal actors; criminal activity is fully exempted from 230. They should be penalized, and law enforcement will play a big and important role in bringing them down, as it did with Backpage, which was taken down, as well as civil cases where there is platform liability for breaking the law. Thank you. In your written testimony you state the internet has led to a proliferation of international terrorism. As you may know, federal law imposes both criminal and civil liability for providing material support for terrorism. Understanding that Section 230 does not apply to federal criminal law, have companies used Section 230 to shield themselves from civil liability for allowing their platforms to be used for propaganda and recruitment by terrorists?
There are ongoing cases, and there have been several cases where platforms have been accused of violating civil laws for hosting content on their platforms. They have invoked Section 230 successfully, and I think if you look at the facts of those cases, that is quite appropriate. The reality is that it is very difficult for a platform to tell in advance the difference between content that is protected political advocacy and content that steps over the line. These cases are hard and complicated. Significantly, Section 230 also creates a space, because of the additional protection it provides, for service providers to moderate when they choose to and to enforce their own policies. Do you have any thoughts on how this should be addressed from a technological perspective? I want to start by saying, when you hear about the moderation happening, and we have heard from Google and Reddit, you should understand it happens under pressure: pressure from advertisers, pressure from capital, pressure from the Hill, and pressure from the press. When there is bad news, then they start getting serious. For years we struggled and we hit a hard wall; when you started putting pressure on, and advertisers put pressure on, we started getting responses. That is exactly what this conversation is about: what is the underlying motivating factor? Goodwill alone is not working; the pressure has to come from other avenues, and putting on pressure through modest changes to 230 is the right direction. And I agree, if these are good actors, then we should encourage the change and help them clean up and deal with the problems we are dealing with. I have been in this fight for over a decade, and it is a very consistent pattern: minimize the extent, deny the technology exists, then get pressure and start making changes. We should just skip ahead and agree that we can do better, and let us start doing better. Thank you. I recognize for five minutes Congressman Gianforte. Thank you, madam chair, and thank you all for being here.
About 20 years ago, I worked out of a spare bedroom of our home to launch a business to improve customer service; the company was called RightNow Technologies. We grew the business to be one of the largest employers, with about 500 high-wage jobs, and the platform we created had about 8 million unique visitors per day, so I understand how important Section 230 can be for small business. This important liability shield has gotten mixed up with complaints about viewpoint discrimination, and I want to highlight one case. In March of this year, the Rocky Mountain Elk Foundation reached out to my office because Google had denied one of its advertisements. The foundation, as it had done many times before, tried to use paid advertising on the Google network to promote a short video of a father hunting with his daughter. This time the foundation received an email from Google saying, and I quote, any promotions about hunting practices, even when they are intended as a healthy method of population control or conservation, are considered animal cruelty and deemed inappropriate to be shown on our network. The day I heard about this I sent a letter to Google, and you were very responsive, but the initial position taken was absurd. Hunting is a way of life in Montana and many parts of the country. I am very thankful that you worked quickly to reverse that, but I remain very concerned about Google's effort to stifle the promotion of the Rocky Mountain Elk Foundation and how they were treated, and I wonder whether other similar groups have faced similar efforts to shut down their advertising. We do not know how many hunting ads Google has blocked in the last five years.
In my letter I invited Google's CEO to meet with the leaders of our recreation businesses in Montana, and I have not heard anything back, so we would extend the invitation again. I think, frankly, it would help Google to get out of Silicon Valley, come to Montana, sit down with your customers, and hear from them directly about the things that are important to them. I would be happy to host the visit. And we would be happy to meet. I think it is important to understand the work these groups do to further conservation and help species thrive, and as an avid hunter I know many businesses in Montana focus on hunting and fishing. I worry they would be denied the opportunity to advertise on one of the largest online platforms, which you have built, to your credit. I also worry that overburdensome regulation could hurt small businesses and stifle Montana's rapidly growing high-tech sector. How can we walk the line between protecting small business and innovation versus overburdening them? I think we have to be very careful, because right now we have near monopolies in the technology sector. If we regulate now, small companies are not going to be able to compete, and there are ways of creating carve-outs; when we talk about regulations, we should be talking about exempting small platforms, and I think we want to tread lightly. Ms. Peters made the point that we want to inspire competition for better business models and allow these companies to emerge. I think there are mechanisms; we do have to think carefully. We had a discussion about the efforts to get criminal activity off the networks. On the following, how do we ensure that content moderation does not become censorship and a violation of the First Amendment? The way we think about content moderation is as a collaboration between humans and computers. What computers are suited to do is the same thing over and over and over again.
Content moderation works when a human moderator says this is a child being sexually exploited, and it is very targeted to that piece of content, like the PhotoDNA technology we developed a decade ago. That has an error rate of about one in 60 billion. But you have to be operating at a very high scale, and on the human moderator side, you heard from Google: 10,000 moderators, with 500 hours of video uploaded every minute. That is not enough moderators; you can do the arithmetic yourself. Those moderators would each have to look at hours and hours of video, so we have to keep working on moderation. The gentleman yields back. And now I recognize Congresswoman Blunt Rochester. Thank you, madam chairwoman, the chairman, and the members, and thank you for holding this hearing. Many of us are seeking to more fully understand how Section 230 of the Communications Decency Act can work well in an ever-changing virtual and technological world, so this hearing is really significant. Also, as Mr. Huffman said, and I think it applies to all of us, we must constantly be evolving our policies to face new challenges while also balancing civil liberties. We have a really important balance to strike. My questions surround blocked content and moderation. I want to start off by saying that the use of artificial intelligence and machine learning to filter through content posted on websites as large as YouTube provides a technological solution to moderating an ever-increasing amount of content. However, as we become more reliant on algorithms, we are increasingly finding blind spots and gaps that may be difficult to reach with simply more and better code. I think there is a real concern that groups already facing prejudice and discrimination will be further marginalized and censored. As I thought about this, I even thought about the African American community in the 2016 election. Dr. Farid, can you describe the challenges of moderation by algorithm, including possible bias? I think you are absolutely right: when we automate at this scale, we are going to have problems.
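The moderator-scale claim above ("you can do the arithmetic yourself") can be checked with a back-of-the-envelope calculation. This sketch uses only the two figures cited in the hearing, 500 hours of video uploaded per minute and 10,000 moderators; the assumption that every hour would be reviewed by a human is mine, made purely for illustration.

```python
# Back-of-the-envelope check of the moderation-scale claim.
# Figures from the hearing: 500 hours uploaded per minute, 10,000 moderators.
UPLOAD_HOURS_PER_MINUTE = 500
MODERATORS = 10_000

# Hours of new video arriving each day.
uploaded_per_day = UPLOAD_HOURS_PER_MINUTE * 60 * 24   # minutes/hour * hours/day

# Review hours each moderator would owe per day to watch everything once.
hours_per_moderator = uploaded_per_day / MODERATORS

print(f"{uploaded_per_day:,} hours of video uploaded per day")
print(f"{hours_per_moderator:.0f} review hours per moderator per day")
```

The result, 720,000 hours of video per day, or 72 review hours per moderator per day, is why the witness argues human review alone cannot keep up and automated tools remain necessary.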
We have already seen that, and we know that face recognition does much worse on women and people of color. The problem with automated moderation is that it does not work at this scale. If your algorithm is 99 percent accurate, which is very good, you are still making one in 100 mistakes, and that is tens of millions of mistakes a day when you make them at the scale of the internet. So the underlying idea that we can fully automate so as not to take on the responsibility of hiring human moderators simply does not work. I fear we have moved too fast toward relying on the algorithms because we do not want to hire the moderators, because of the expense. We know that will not work in the next year or two years. It also assumes an adversary that is not adapting, and we know the adversary is going to adapt. We know all machine learning is vulnerable to what are called adversarial attacks: you can make small perturbations to the input and completely fool these systems. I want to ask a quick question. Both of you talked about the number of human moderators available, and I know we have had many hearings on the challenges of diversity in the tech field. Mr. Huffman, from the user perspective, in terms of the moderators, or the people that you hire, and the 10,000 or so that were mentioned: are these people that you hire, or users? For us, it is a small fraction of our roughly 500 employees, and millions of users participate as well. The 10,000 that I mentioned are full-time employees, and we also work with trusted flaggers and specialist users. I know we are short on time, but could you provide us with information on the diversity of your moderators? That is one of my questions. I am assuming it is going to be a challenge to find a diverse population of individuals for this role; what are you doing in that vein? If we can have a follow-up. My last question is for the panel: what should the federal government be doing to help in this space? I am really concerned about the capacity to do it and do it well, if anybody has any suggested recommendations.
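Dr. Farid's point above, that even a 99-percent-accurate classifier makes tens of millions of mistakes at internet scale, follows directly from the arithmetic. In this sketch the daily volume of one billion moderation decisions is an illustrative assumption of mine, not a figure from the hearing; only the 99 percent accuracy comes from the testimony.

```python
# A 99%-accurate system is still wrong on 1 in 100 decisions.
ERROR_RATE = 0.01

# Illustrative assumption: one billion items reviewed per day
# (not a figure from the hearing, just internet-scale volume).
DAILY_ITEMS = 1_000_000_000

mistakes_per_day = DAILY_ITEMS * ERROR_RATE
print(f"{mistakes_per_day:,.0f} mistaken moderation decisions per day")
```

At a billion decisions a day, that is ten million errors daily, which is the gap the witness says can only be closed by pairing algorithms with human moderators rather than replacing them.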
I think this conversation is helping, and you are going to scare the bejesus out of the technology sector. I am out of time, but thank you so much to all of you for your work. The gentlewoman yields back, and now, last but not least, Representative Soto. Thank you, madam chairwoman. First of all, thank you all for being here; you are in the home stretch. It is amazing we are here today when we think about how far the internet has progressed; it is one of the greatest inventions in human history, putting knowledge at our fingertips. It is just incredible, and Section 230 has been a big part of it, creating the safe harbor that enabled innovation. But it has also created a breeding ground for harassment and disinformation, and a breeding ground for white supremacists. We have the wonderful, and then all the terrible things on the other side. Falsehood spreads faster than the speed of light, while fixing the internet seems to go at a snail's pace; that is one thing that I hear from my constituents. I want to start with basics so I know everybody's opinion. Who do you think should be the primary enforcer, with the choices being the FTC, the FCC, or the courts? Those are my only three options. Who do you think should have that competency? I am going to take a different option: our company's cardinal principle is that at the end of the day users should control their internet experience, and they need many more tools to make that possible. I think that is a ridiculous argument. The vast majority of people do not commit crimes; most people are good, and a small percentage, statistically, in any community commits crime. You have to control for them. I think moderation has always been a layered approach, and I want to point out that the courts and the FTC do have jurisdiction, and as you know, companies do not like the FTC coming over, and the courts are not always a happy place. We all have a responsibility.
If we were to tighten up the rules in the courts, it would be great to hear your view on injunctive relief: do you think that would be enough, and whether or not there should be fines? I am not a policymaker or a lawyer; I am a technology maker, so with due respect, I am not the one who should answer that question. Would injunctive relief in the courts be enough to change certain behaviors? The courts do have that power, and I did want to echo the small businesses, which say this framework has created certainty that is essential for moderation and economic viability. I would shudder to see what would happen if we were smaller and on the receiving end. You are nodding quite a bit; is this something we should be looking at? As you say that, all I see is the First Amendment and prior restraint; we need to be careful with those kinds of remedies. But if people act unreasonably and recklessly, I think a remedy should be available. Lastly, I want to talk about 230 being incorporated into our trade deals. My friend from Orlando, I know you talked about the issues of including 230 in trade deals, and how a provision like that could tie Congress's hands from reforming the law down the line, and that is why we have decided to weigh in on the trade deal. There are hundreds of pages in the existing trade agreements, and Congress regularly has hearings on pharmaceuticals, labor, and climate, which appear there too. There is nothing in including existing U.S. law in a trade agreement, to promote a U.S.-style framework at a time when countries like China and Russia are developing their own rules for the internet, and nothing in the current USMCA or the Japan agreement, that would limit your ability to look at 230 and change the law. The gentleman yields back. And that concludes our period of questioning.
I ask unanimous consent to put into the record a letter from CreativeFuture with an attachment, a letter from the American Hotel and Lodging Association, a letter from the Consumer Technology Association, a letter from the Travel Technology Association, a white paper from Airbnb, letters from several other associations, a letter from a representative in support of the PLAN Act, a petition to the FTC from a representative, a letter from TechFreedom, a letter from the Internet Association, a letter from the Wikimedia Foundation, a letter from the Motion Picture Association, an article from The Verge called Searching for Help, and a statement for the record. Let me thank our witnesses. I think this was a really useful hearing, and for those of you who have suggestions more concrete than came up today, our committee would appreciate them very much, and I am sure the joint committee would appreciate them as well. I want to thank all of you so much for your thoughtful presentations and for the written testimony, which often went well beyond what we were able to hear today. I want to remind members that, per committee rules, they have ten business days to submit additional questions for the witnesses who have appeared, and I ask the witnesses to please respond promptly to any such questions they may receive. At this time the committees are adjourned. Thank you
