Transcripts For CSPAN2 House Energy Commerce Subcommittee Hearing On The Internet Consumer... 20240713




This hearing examines the liability of internet companies and internet users for reposting third-party material. Recorded earlier this week, this is just over three hours. >> The committee will now come to order. The chair recognizes himself for five minutes for an opening statement. Online content moderation is the engine of the experience we know today. Whether looking up restaurant reviews, catching up on SNL on YouTube, or checking in on a friend or loved one on social media, these are all experiences we have come to know and rely on. The platforms we go to for these things have been enabled by user-generated content, as well as by the ability of these companies to moderate that content and create communities. Section 230 of the Communications Decency Act has enabled that ecosystem to evolve by giving online companies the ability to moderate content without treating them as the publisher or speaker of that content; the result has been the creation of massive online communities where people can come together and interact. Today this committee will be examining how Section 230 has enabled both the good and the bad. Thank you to the witnesses for appearing before us; each of you represents an important perspective on content moderation in the online ecosystem. Many of you raise complex concerns in your testimony, and I agree this is a complex issue. I know some of you have argued that Congress should amend Section 230 to address online criminal activity, misinformation, and hate speech. I agree these are serious issues. Like too many other communities, my hometown of Pittsburgh has seen what unchecked hate can lead to: we suffered the deadliest attack on Jewish Americans in our history. The shooter acted after posting anti-Semitic remarks on a fringe site, finally posting that he was going in.
A similar act occurred in New Zealand, where the shooter streamed the attack on social media; platforms did not move fast enough to quell its spread, and the same algorithms that help celebrity videos and sports highlights go viral helped the attack spread. In 2016 we saw similar issues when foreign actors used these platforms against us to disseminate misinformation and instill distrust of our leaders and institutions. Clearly we all need to do better, and I would strongly encourage the witnesses before us who represent these online platforms, and other major platforms, to step up. The other witnesses on the panel raise serious concerns about the kind of content available on these platforms and the impact it is having on society, some of it very disturbing. You must do more to address those concerns. That being said, Section 230 doesn't just protect the biggest platforms. It enables comment sections on individual blogs, lets people leave honest and open reviews, and allows discussion of controversial topics, which enriches our lives and our democracy. The ability of individuals to have their voices heard cannot be understated. Content speaking truth to power has created political movements that changed the world we live in. We all need to recognize the incredible power the internet has for good, as well as the risk when it is misused. I look forward to our discussion today, and I yield the balance of my time. >> Thank you. In April 2018, Mark Zuckerberg came before Congress to say "it was my mistake" about Facebook and Russia during the 2016 presidential election. In the 555 days since, I do not feel he has learned from his mistakes. Recent developments confirm what we have feared: they have continued to allow ads that make the online ecosystem fertile ground for election interference in 2020. This disinformation is not difficult to identify, and removing it should be easy. If Facebook does not want to police the truth of political speech, then they should get out of the game. We need this discussion now more than ever, and I yield back.
>> The chair now recognizes the ranking member of the subcommittee for five minutes for his opening statement. >> We are here to discuss online content moderation and Section 230 of the Communications Decency Act. This continues a discussion from last session about how Congress ensures accountability on the internet today. The witnesses represent stakeholders closely tied to Section 230, from large and small companies as well as academics and researchers. I am not advocating repeal, but I caution Congress that this is a slippery slope, a potential death by a thousand cuts for an internet industry that is a long American success story. So before we discuss whether there should be nuanced modifications, we should understand how we got to this point by looking at Section 230 in context. The Telecommunications Act of 1996 included other prohibitions on objectionable and lewd content; those provisions were struck down by the Supreme Court, but the Section 230 provisions remained. Notably, the section was intended to encourage platforms offering internet services to proactively take down harmful content. We wanted to encourage companies like Prodigy, CompuServe, America Online, and Microsoft to do everything they could to help us control, at the front door of our house, what our children see. It is unfortunate that the courts took such a broad interpretation: instead of platforms having to demonstrate that they are doing everything possible to earn the protection of those Good Samaritan provisions, numerous platforms use them as an all-purpose tool to avoid litigation without having to take any responsibility. Not only are the Good Samaritans selective about taking down harmful activity, but Section 230 is interpreted so broadly that nearly any platform can skate by. That's not to say the protections afforded by Congress have produced no good; many of those platforms have made billions and should account for how they moderate. But too often responsibility is the exception, not the rule.
Today we will dig deeper into how platforms police the content on their sites using the tools provided by Section 230, their own terms of service, or other authority, and we should encourage those efforts to continue. Mister Chairman, thank you for holding this hearing so we can have an open discussion on Section 230 and reevaluate the law. We must ensure platforms can reasonably be held accountable without drastically harming innovative startups. I yield back. >> I should have mentioned that this is a joint hearing between our subcommittee and the Subcommittee on Consumer Protection and Commerce, and I would like to recognize the chair of that subcommittee for five minutes. >> Thank you, Mister Chair, and thank you to all the panelists for being here today. The internet has improved our lives in many ways, enabling Americans to participate more actively in society, education, and commerce. Section 230 of the Communications Decency Act is at the heart of United States internet policy, and this law allowed the internet to grow into what it is today. It is intended to encourage platforms hosting user-generated content to remove offensive and dangerous content. The internet has come a long way since the law was first enacted; its scale and sophistication have increased exponentially. Unfortunately, so has the number of Americans who report experiencing extremism and harassment, including sexual harassment, stalking, bullying, and violence: over the last two years, 37 percent of users say they have experienced such abuse. Likewise, extremism, hate speech, election interference, and other problematic content are proliferating, content that is not merely problematic but the cause of real harm, harm that multibillion-dollar companies like Facebook, Google, and Twitter can't or won't address. As if this weren't enough cause for concern, more for-profit businesses are using Section 230 as a liability shield for activity that has nothing to do with third-party content or moderation policy.
In a recent Washington Post article, executives seemed to be opening the door to claims that Section 230 immunizes companies from labor, criminal, and local traffic liability. That would represent a major unraveling of a few hundred years of social contracts, community governance, and congressional intent. Also at issue is the FTC's Section 5 authority over unfair and deceptive practices, and whether a service that violates it should be precluded from the immunity. Then there is the question of writing Section 230 into trade agreements. We have already seen that, and there is a push to include it now in the US-Mexico-Canada agreement, even though other countries do not accommodate what the United States has done with 230. We are having an important conversation right now, and it is not appropriate to insert this liability protection into trade agreements while that debate is under way; as a member of the working group that helped negotiate that agreement, I believe we do not need language on 230 in it. All the issues we are talking about today indicate there may be a larger problem: that 230 is no longer achieving its goal of encouraging platforms to protect their users. I hope today to discuss holistic solutions. We are not talking about eliminating the liability protection, but about taking a new look at it in today's world so it can be made even better for consumers. The ranking member of the committee is now recognized. >> Good morning, and welcome to this joint hearing on online content management. As the Republican leader of the consumer protection subcommittee, my aim is to protect consumers while preserving the ability of small businesses and startups to innovate.
In that spirit, we are going to discuss online platforms and Section 230 of the Communications Decency Act. The law grew out of two court cases over content posted on websites by users: one platform sought to moderate content, the other did not. The courts found that the platform that did not make content decisions was immune from liability, but the one that moderated content was not. After these decisions, Congress created Section 230. It is intended to protect interactive computer services from liability for what users post, while also allowing them to moderate content that is illicit or illegal, and to shield them from frivolous suits by bad actors looking to make a quick buck. Section 230 is also largely misunderstood: Congress never wanted platforms to be neutral; it wanted them to moderate content. The liability protection extends to good-faith efforts to moderate material that is obscene, lewd, or excessively violent. Section 230 was meant to be a balance, allowing internet companies to flourish online while empowering those platforms to act and clean up their own sites. The internet has revolutionized freedom of speech, letting every American have their voice heard, with an infinite amount of information at their fingertips. It has provided a platform for anybody to write an op-ed, and Wikipedia provides free, in-depth information on almost any topic you can imagine through mostly user-generated and user-moderated content. Companies that started in dorm rooms and garages are now global powerhouses. We take great pride that America is the global leader, but these companies have matured. While it is difficult to see disgusting or illegal content, we support free speech, and we benefit from open dialogue and free expression online. There have been calls for big government to mandate how speech is treated online; I do not believe the proposals others have expressed are consistent with that freedom. They resemble the doctrines applied to broadcast regulation before the 1980s, and I strongly caution against advocating a similar doctrine online.
It should not be the FCC's, the FTC's, or any other government agency's job to moderate free speech online. Instead, Section 230 invites constructive discussion about how platforms use content moderation. This is a very important question to explore today with everybody on the panel: how do we ensure that companies with enough resources are earning their liability protection by ridding their sites of harmful content? I am not saying we should gut Section 230; it is essential. But we should examine Section 230 for unintended consequences and for its effect on the ability of businesses to provide new and innovative services. At the same time, it is clear we have reached the point where it is incumbent upon us as policymakers to have a serious and thoughtful discussion about striking the right balance in Section 230. I yield back. >> The chairman of the full committee is recognized for his opening statement. >> The internet is one of the single greatest innovations for promoting free expression and community, and it also fosters economic opportunity, with trillions of dollars exchanged online every year. Section 230 of the Communications Decency Act allowed platforms to moderate their sites without excessive risk of litigation, and it has been an incredible success. Since it became law, the internet has grown vastly more complex and sophisticated and truly global. When the law passed, internet users were less than 1 percent of the world's population and only one out of four Americans went online every day; now most of us are online almost every hour. Earlier this year there were 4.39 billion internet users worldwide, and now 230 million smartphones give access to online platforms.
The internet is woven into our social, political, and economic fabric, enhancing the goals of the Telecommunications Act. But with that growth we have also seen the darker side of the internet. Online radicalization has spread, leading to mass shootings; international terrorists use the internet to recruit; platforms are used for the illegal sale of drugs; foreign governments and fraudsters disseminate disinformation, using new technology like deepfakes, to sow civil distrust and undermine democratic elections; there are constant attacks against women, people of color, and other minority groups; and most despicable of all is the sexual exploitation of children online. Congress acted in 1998 against material depicting child sexual abuse online, and while platforms are now better at detecting it, recent reporting shows law enforcement officers are overwhelmed by the crisis. These are issues we cannot ignore, and tech companies must address these serious problems. Each of these issues demonstrates how online content moderation has not stayed true to the values underlying Section 230 and has not kept pace with the increasing importance of a global internet. As policymakers, I'm sure we all have ideas for how we might tackle the symptoms of poor content moderation while protecting free speech, but we must seek to fully understand the breadth and depth of the internet and be careful and bipartisan in our approach. That is why I was disappointed that Ambassador Lighthizer refused to testify today. The US has included language similar to Section 230 in the Mexico-Canada agreement and the US-Japan trade agreement. The ranking member and I wrote to the ambassador raising concerns about why the US is including this language in trade deals even as we debate it across the nation. I was hoping to hear his perspective on why he believes that was appropriate.
Including provisions that are controversial to both Democrats and Republicans is not the way to win support from Congress, but hopefully the ambassador will be more responsive to requests in the future. With that, Mister Chairman, I yield back. >> The chair would like to remind members that, pursuant to committee rules, all opening statements shall be made part of the record. I apologize; the chair yields to the ranking member, my good friend. [laughter] >> Times have changed. Welcome to our witnesses today, and thank you for being here. At the outset, we have another subcommittee meeting upstairs, so I will bounce in between, but I look forward to hearing your comments. Without question we are pleased to have you here. We have had significant hearings on the state of online protection, the legal basis of that ecosystem, and the future of content moderation, which now determines what we see online. Today we undertake a deeper review of the Communications Decency Act portion of the 1996 Telecommunications Act. The chairman and I raised the issue of marrying Section 230 and trade agreements in a letter to US Trade Representative Robert Lighthizer. We expressed concern about taking this policy out of context, and urged that in the future the trade representative consult our committee before negotiating on these very issues. Unfortunately, we have since learned that the language of Section 230 appeared in an agreement with Japan and is still up for discussion elsewhere. We are frustrated about that, and I hope the administration is listening, because up to this point it does not appear to be. Exporting these policies does not reflect the scrutiny the administration itself says it is applying to how Section 230 is utilized in American society, and doing so without the involvement of this committee is even more alarming.
To be clear, this section of the '96 Telecom Act is a foundation for the information age, so we are not here to condemn it but to understand it, and to see that the entire section is faithfully followed rather than just portions of it. Back to the trade piece: I thought the letter to the ambassador covered the USMCA and every trade agreement going forward. I'm tired of this. Then we found out it's in the other agreement. So clearly they are not listening to our committee or to us. We are serious about this; it is a real problem. Take note. If we refer to Section 230 as the 26 words that created the internet, we are already missing the mark, because that word count excludes the Good Samaritan obligations. We should start talking about the section as the 83 words that preserve the internet. All of these provisions should be taken together, not apart, and many concerns could be addressed if platforms would simply enforce their terms of service. A quick history lesson is in order. Today's internet is different from CompuServe, Prodigy, and the message boards that dominated the internet in the nineties; it is more content-rich than ever before, and there were problems in its infancy with regulating speech online. The author of the legislation pointed out on the House floor that no matter how big the army of bureaucrats, it would not protect his kids, because the federal government will not get there in time. Congress recognized then that we need companies to step up to the plate to curb harmful and illegal content on their platforms, but the internet was not meant to be managed and regulated by the government. Section 230 gives providers and users the ability to go after illegal or harmful content without being held liable in court.
Now we have seen social media platforms slow to clean up their sites but quick to claim immunity from legal responsibility; in some cases they have shirked responsibility for the content on their platforms. The liability shield remains the central bargain of the internet: platforms with user-generated content are protected from liability in exchange for making good-faith efforts to address harmful and illegal content. Enforcing terms of service, and how obligations fall on small entities versus large ones, are also important questions today. Thank you for having this hearing; now I have to go to the other hearing. >> The administration doesn't listen to you either? >> I will let my statement speak for itself. [laughter] >> I will reiterate that, pursuant to committee rules, all opening statements are made part of the record. I will now introduce our witnesses for today's hearing: Mister Huffman, cofounder and CEO of Reddit; a professor of law at Boston University School of Law; the legal director of the Electronic Frontier Foundation; the executive director of the Alliance to Counter Crime Online; the global head of intellectual property policy for Google; and a professor at the University of California, Berkeley. Welcome to all of you, and thank you for joining us today. We look forward to your testimony. At this time the chair will recognize each witness for five minutes to provide an opening statement. Before we begin, I will explain the lighting system. In front of you is a series of lights; the light will initially be green at the start of your opening statement and will turn yellow with one minute remaining, so please wrap up at that point. When the light turns red, we cut your microphone off.
>> No, we don't, but you are recognized for five minutes. >> Good morning, chairpersons and ranking members. Thank you for inviting me. I'm a cofounder and the CEO of Reddit, and I'm grateful to share why Section 230 is critical to our company and the open internet. Reddit handles content moderation in a fundamentally different way than other platforms: we empower communities, and that model relies on 230. Weakening 230 poses a threat not just to us but to thousands of startups across the country, and it would destroy what little competition remains. My college roommate and I started Reddit as a user-powered forum to find content. Since then it has grown into a vast community where people find news, laughs, and a sense of belonging. It is made up of communities created and moderated by users themselves, a model that has taken years to develop, with lessons learned along the way. I left the company in 2009, and it then lurched from crisis to crisis over the moderation issues we are discussing today. In 2015 I came back because I realized the vast majority of our communities were providing an invaluable experience to users, and Reddit needed a better approach. The way Reddit handles content moderation is unique. Our governance model is one of self-rule: everyone has a vote, users self-organize, and everyone shares some responsibility for how the platform works. First, there are fundamental rules that everyone must follow; think of these as federal laws. Our employees, including the data scientists collectively known as the Anti-Evil team, enforce those policies. Below that, each community creates its own rules, like state laws, written by the volunteer moderators themselves to fit the unique needs of their communities; these tend to be far more specific and complex. This self-moderation is the most scalable solution to moderating content. Individual users play a crucial role as well: they can vote any piece of content up or down, and they can report it. Through this system of voting and reporting, users accept or reject content, turning every user into a moderator.
The system is imperfect; it is still possible to find things that break the rules, but its effectiveness has improved with our efforts. An independent academic analysis shows we are effective in curbing bad behavior. When we investigated Russian attempts to manipulate our platform in 2016, we found that of all the accounts that tried, less than 1 percent made it past our team and the downvotes of everyday users. We constantly evolve our policies, and since my return we have made a series of updates covering pornography, controlled goods, and more. These are the ways we have worked to moderate in good faith, which brings us to the question of what Reddit would look like without 230. For starters, we would be forced to defend against anyone with enough money to bring a lawsuit, no matter how frivolous. Many such cases involve defamation, and as an open platform where people can voice critical opinions, we would be a prime target, enabling censorship through litigation. Even targeted limits on 230 would create a regulatory burden on the entire industry, benefiting the largest companies by imposing significant costs on their competitors. We have a small staff and a large user base, more than enough to be considered a large company, but by tech standards we are an underdog; we compete with public companies ten or a hundred times our size. We recognize there is harmful material on the internet, and we are committed to fighting it, but it is important to understand what could be lost. Consider our communities around the opioid epidemic, where people who are struggling support each other on their way to sobriety. If hosting that content became too risky, removing those communities would be a disservice to people who are struggling, but that is exactly the type of decision that restrictions on 230 would force on us.
230 is a uniquely American law, a balanced approach that allows internet platforms like ours to flourish while also attempting to mitigate the downsides of free expression. While those downsides are serious and demand the attention of industry and Congress, they do not outweigh the overwhelming good that 230 has done. Thank you; I look forward to your questions. >> Thank you for having me, and for seating me on such a rich bench of panelists. When Congress adopted Section 230, the goal was to incentivize tech companies to moderate content. Although Congress wanted the internet, as one could imagine it at that time, to be open and free, it also knew that openness would risk offensive material. So it devised a legal shield for Good Samaritans who were trying to clean up the internet through under- and over-filtering of content. The purpose of the statute was clear, but in practice we have seen courts massively overextend Section 230 to sites that are irresponsible in the extreme and produce extraordinary harm. We have seen the liability shield extended to sites whose entire business model is abuse, sites that do nothing but carry sex videos, and they enjoy immunity with no liability. Interestingly, not only do bad actors enjoy the legal shield from responsibility, but so does activity that has nothing to do with speech, like the trafficking of dangerous goods, and the costs are significant. The overbroad interpretation shelters these reckless, irresponsible sites. Take the case of online harassment, which I have been studying for the past ten years. The costs are significant, especially to women and minorities; it is costly to people's life opportunities. When a Google search of your name turns up a photo posted without your consent, your home address, and defamation about you, it's hard to get a job or keep a job. Victims are also driven offline; they are terrorized, they change their names, and they move.
So the free-speech calculus is not necessarily a win for free speech, as we lose diverse viewpoints and voices. And the market will not fix this: platforms make money from online advertising, and this content attracts eyeballs, so we cannot rely on them to solve the problem. We have to deal with Section 230, which has tremendous upside, but we should return it to its original purpose and condition the Good Samaritan shield on reasonable content moderation practices. There are other ways to do it, which I describe in my testimony, but we have to do something; doing nothing sends a message to the victims of online abuse. Thank you. >> The chair recognizes our next witness for five minutes. >> Thank you. As legal director, I want to thank the chairs, ranking members, and members of the committee for the opportunity to share our thoughts with you today on this important topic. For nearly 30 years we have represented the interests of technology users, whether in court cases or in broader policy debates, to help ensure that law and technology support our civil liberties. We are well aware that online speech is not always pretty; sometimes it is extremely ugly and causes serious harm. We all want an internet where we can meet, debate, organize, and learn, and where we have control over our online experience. We want elections free from manipulation, and we want women and marginalized communities to be able to speak openly about their experiences. But chipping away at the legal foundations of the internet is not the way to accomplish those goals. Section 230 gets voices out to the whole world without their owners having to own a newspaper. The law has thereby helped remove much of what stifled change and perpetuated power imbalances, because it doesn't just protect the tech giants; it protects regular people.
If you have forwarded an email containing a piece of political criticism, you were protected by Section 230. If you run a neighborhood mailing list, you have the protection of Section 230. If you use Wikipedia to figure out when George Washington was born, you benefit from Section 230. And if you view online videos documenting events in real time, you benefit from Section 230. Intermediaries well beyond the social media platforms are protected by Section 230, not just for their own benefit, but so these services can be available to all of us. There is another reason to resist the impulse to require more active monitoring and moderation of user content: simply put, platforms are bad at it. As they have shown, they regularly take down all types of valuable content, because it's difficult to draw a clear line between lawful and unlawful speech, particularly at scale, and that often silences the voices of already marginalized people. Moreover, increased liability leads to censorship: it's a lot easier and cheaper to take something down than to pay lawyers to fight over it, especially for a small business or nonprofit. Automated filtering is not the solution either; context matters, and robots are bad at nuance. For example, in December 2018, Tumblr announced a new ban on adult content and identified several types of images that would remain acceptable under the new rule; shortly thereafter, its own filtering technology flagged those same images as unacceptable. New legal burdens are also likely to stifle competition. Facebook and Google can afford moderation automation and litigation; their competitors don't have that kind of budget. In essence, we would have opened the door for the giants and slammed it shut for everyone else. The free and open internet was never entirely free or open, but it still represents the extraordinary idea that anyone with a computing device can tell their story, organize, and educate. Section 230 helps make that idea a reality, and it's worth protecting. I look forward to your questions. >> You are now recognized for five minutes.
>> Distinguished members of the subcommittee, it's an honor to be here today to discuss one of the premier security threats of our time, and one that Congress is well positioned to solve. As executive director, I lead a team of academic security experts and citizen investigators who have come together to eradicate criminal and terror activity on the internet. I want to thank you for your interest in our research and for asking me to join the panel of witnesses today. Like you, I had hoped to hear the testimony of the US Trade Representative, because this issue is critical to national security. Distinguished members, I have a long history of tracking organized crime and terrorism. I wrote a book about the Taliban and the drug trade, was recruited by US military leaders to map transnational crime networks for Special Operations Command, and then received State Department funding; that is when my team discovered that the largest market for endangered species was located on social media platforms. Looking at crime more broadly, the range and scale of illicit activity online is far worse than I ever imagined. We can and must get this under control.
Under the original intent of the law, there was supposed to be a shared responsibility between tech platforms, law enforcement, and organizations like ours, but the platforms are failing to uphold their end of the bargain. Because of broad interpretations from the courts, they enjoy an undeserved safe harbor for illicit activity. The tech industry may try to convince you today that most illegal activity is confined to the dark web, but that's not the case: surface web platforms, connected to payment systems, give criminals a far greater reach, and actors ranging from Mexican drug cartels on down have weaponized US publicly listed platforms to sell a wide range of illegal goods. Now, in the midst of the public health crisis of the opioid epidemic, Facebook, the world's largest social media company, only began tracking drug activity last year, and within six months it identified 1.5 million posts selling drugs; that is just what they admitted to removing. To put that into perspective, it's 100 times more than the dark web sites ever carried. Study after study by ACCO members has shown widespread use of Google, Twitter, Facebook, Reddit, and YouTube to sell fentanyl, oxycodone, and other highly addictive drugs in direct violation of US federal law. Drugs are sold on every major internet platform because there is no law to hold tech firms responsible, even if a child dies, and the platforms play an active role in facilitating the harm: the algorithms originally designed to connect friends also help criminals and terror groups connect to a global audience. ISIS and other groups use social media to recruit, fundraise, and spread propaganda. The ACCO alliance includes an incredible team looking at online antiquities trafficking, in many cases by ISIS supporters, on Instagram and Google as well, and others tracking the trade in elephant ivory; in some cases these markets are literally threatening the survival of the species.
i could continue: illegal dogfighting, children being sexually abused, human remains, counterfeit goods are all just a few clicks away. the tech industry routinely says that modifying section 230 is a threat to freedom of speech, but it is a law about liability, not speech. please try to imagine any other industry that has ever enjoyed such a subsidy from congress: total immunity no matter what harm it brings to consumers. they could have built internal controls, but it was cheaper and easier to scale without them. they were given this incredible freedom, and they have no one to blame but themselves. we want to see reforms to the law: immunity stripped from firms that host illicit content; a requirement that they report crime and terror activity to law enforcement; and appropriations so law enforcement can respond. distinguished committee members, if it is illegal in real life, it ought to be illegal to host it online. it is imperative we make the internet a safer place for all. thank you very much. >> you are now recognized for five minutes. >> chairmen, ranking members and distinguished members of the committee, thank you for the opportunity to appear before you today. i appreciate your leadership and welcome the opportunity to discuss google's work. i am the global head of intellectual property policy at google, and i advise the company on public policy frameworks for the moderation of online content. google's mission is to organize the world's information and make it universally accessible and useful. our services, and many others, have enabled creativity, learning and access to information, and that innovation has brought economic benefits to the united states. this is why we have robust policies and guidelines, which we update regularly. my testimony today will focus on the gray areas of content moderation, how section 230 has helped the internet grow, and how it contributes to taking down harmful content. section 230 of the communications decency act has underpinned the internet ecosystem, enabling providers to take aggressive steps to fight online abuse while preserving legitimate content in an online economy worth $29 trillion each year.
central to addressing illegal content is the ability to take action on problematic content, and that ability is protected by 230, which not only clarifies when services can be held liable but also provides the legal certainty necessary to act against harmful content of all types. it was specifically introduced to incentivize self-monitoring and content moderation, not to alter platform liability under federal criminal law, which is exempted from its scope. its importance has only grown as the internet has grown: a recent study shows that over the next decade, 230 will contribute an additional 4.25 million jobs and $440 billion to the economy. furthermore, weakening the framework would have an impact on investment. internationally, 230 is a differentiator: china, russia and others take a different approach, stifling innovation and censoring speech online, so it is critical that political leaders understand what is at stake. the best way to understand its importance is to imagine what might happen without it: political blogs and sites of all kinds would either stop moderating content or be unable to offer reliable services, because without 230 platforms could be sued both for removing content and for content they leave up, including postings about things like pyramid schemes. that is why we maintain rigorous policies, and for each product we have a specific set of rules and guidelines for how it may be used. these policies, together with community guidelines, flagging mechanisms and increasingly effective machine learning, facilitate removal at a scale humans alone could never achieve. in a three-month period in 2019, over 9 million videos were removed from our platform for violating community guidelines; 87 percent of that content was flagged by machines first rather than humans, and of those detected by machines, 81 percent were removed before a single user ever viewed them.
to support this work we have invested hundreds of millions of dollars, and in my written testimony i go into further detail about the policies and procedures through which we are committed to being responsible actors. google will continue to invest in the people and the technology needed to meet this challenge. as the committee examines these issues, thank you for your time; i look forward to your questions. >> you are recognized. >> many wonderful things have emerged from the internet in recent decades, and at the same time many horrific things have emerged: the massive proliferation of child sexual abuse material and of domestic and international terrorism; disinformation campaigns designed to sow civil unrest and disrupt democratic elections; dangerous conspiracy theories; the harassment of women and underrepresented groups in the form of threats of sexual violence; small and large scale fraud; and the failure to protect our personal data. how, in 20 short years, did we go from the promise of an internet that would democratize access to knowledge and make the world more understanding and enlightened, to this? a combination of naivete and willful ignorance has led the titans of tech to fail to install proper safeguards on their services. what we face is not new: in 2003 it was well known that online services were rife with child predators, but the companies dragged their feet and did not respond to the known problem at the time, nor did they put in place proper safeguards for what should have been the anticipated problems we face today. the services don't seem to have trouble dealing with unwanted material when properly motivated: they routinely and effectively remove copyright infringement, and they effectively remove legal adult pornography, because otherwise their services would be littered with pornography, driving away advertisers. during his 2018 congressional testimony, mister zuckerberg repeatedly invoked artificial intelligence as the savior for content moderation, promising it in five years. it is not clear what we should do in the intervening decade or so, and the claim is overly optimistic.
for example, earlier this year facebook's chief technology officer showcased facebook's latest ai technology for discriminating images of marijuana from images of broccoli. despite all the latest advances in ai and pattern recognition, the system is only able to perform this task with an average accuracy of 91 percent; this means that one in 10 times the system is wrong. at a scale of 1 billion daily uploads, that technology cannot possibly automatically moderate content. and this discrimination task is surely much easier than identifying the broad class of child exploitation, extremism and disinformation material. the promise of ai is just that, a promise, and we cannot wait a decade or more hoping ai will improve by the orders of magnitude needed before it might be able to contend with automatic online content moderation. the situation is even more complicated: earlier this year mister zuckerberg announced that facebook is implementing end-to-end encryption across all its services, preventing anyone, including facebook, from seeing the contents of any communication. implementing end-to-end encryption will make it even more difficult to contend with the litany of abuses i enumerated at the opening of my remarks. we can and must do better when it comes to the technology sector contending with some of the most violent, dangerous and hateful content online. i simply reject the naysayers who argue that it is too difficult from a policy or technological perspective, or those who say that reasonable and responsible moderation will stifle the open exchange of ideas. thank you, and i look forward to your questions. >> thank you, doctor farid. well, that concludes our openings, and we're going to move to member questions. each member will have five minutes to ask questions of our witnesses, and i will start by recognizing myself for five minutes. i have to say, as i said at the beginning of my remarks, this is a complex issue, a very complex issue, and i think we've all heard the problems. what we need to hear is solutions.
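[editor's note] doctor farid's point about classifier accuracy at platform scale can be made concrete with a back-of-the-envelope calculation. this is an illustrative sketch only, using the figures quoted in the testimony (91 percent accuracy, roughly 1 billion daily uploads as an assumed order of magnitude):

```python
# Back-of-the-envelope: what a 91%-accurate classifier means at platform scale.
daily_uploads = 1_000_000_000   # assumed order of magnitude from the testimony
accuracy = 0.91                 # accuracy figure quoted for the image-classification demo

# Every misclassification is either harmful content let through
# or legitimate content wrongly removed.
errors_per_day = daily_uploads * (1 - accuracy)
print(f"misclassified uploads per day: {errors_per_day:,.0f}")  # 90,000,000
```

even a seemingly high accuracy rate leaves tens of millions of daily errors at this volume, which is the core of the argument that ai alone cannot yet carry automated moderation.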
let me just start by asking all of you, by a show of hands: who thinks that online platforms could do a better job of moderating the content on their websites? so that's unanimous. and i agree; i think it's important to note that we all recognize content moderation online is lacking in a number of ways and that we all need to address this issue better. and if it's not you, who are the platforms and the experts in this technology, and you put that on our shoulders, you may see a lot of rules you don't like very much, with a lot of unintended consequences for the internet. so i would say to all of you: you need to do a better job, and you need to have the industry getting together and discussing better ways to do this. the idea that you can buy drugs online and we can't stop it, to most americans, they don't understand why that's possible, why it wouldn't be easy to identify people trying to sell illegal things online and take those sites down. and the child abuse material is very troubling. on the other hand, i don't think anybody on this panel is talking about eliminating section 230. so the question is: what is the solution between not eliminating 230, because of the effects that would have on the whole internet, and making sure that we do a better job of policing this? mister huffman, a lot of people know reddit, but it's a relatively small company when you place it against some of the giants, and you host many communities and rely on your volunteers to moderate discussions. i know that you have shut down a number of controversial subreddits that spread hateful, violent and disturbing content, misinformation and dangerous conspiracy theories. but what would reddit look like if you were legally liable for the content your users posted, or for your company's decisions to moderate user content in communities? >> thank you for the question. what it would look like is that we would be forced to go to one of two extremes.
in one version, we would stop looking. we would go back to the pre-230 era, which means if we don't know, we're not liable. that, i'm sure, is not what you intend, and it's not what we want; it would not be aligned with our mission of bringing community and belonging to everybody in the world. the other extreme would be to remove or prohibit any content that could be remotely problematic, and since reddit is a platform where 100 percent of our content is created by our users, that fundamentally undermines the way reddit works. it's hard for me to give you an honest answer as to what it would be like; i'm not sure reddit as we know it could exist in a world where we had to remove all user generated content. >> you talk about the risk to free speech if section 230 were repealed or substantially altered, but what other tools could congress use to incentivize online platforms to moderate dangerous content and encourage a healthier online ecosystem? what would your recommendation be, short of eliminating 230? >> i think a number of the problems that we have talked about today so far, which i think everyone agrees are very serious, and i want to underscore that, are actually often addressed by existing laws that target the conduct itself. for example, in the armslist case, we had a situation where the selling of the guns, while controversial, was actually perfectly legal under wisconsin law. similarly, many of the problems we have talked about today are already addressed by federal criminal laws that already exist, so section 230 is not a barrier because of the carveout for federal laws. i would urge this committee to look carefully at the laws that target the actual behavior we are concerned about, and perhaps start there. >> ms. peters, you did a good job terrifying us with your testimony. what solution do you offer short of repealing 230? >> i don't propose repealing 230. i think we want to continue to encourage innovation in this country.
it's a core driver of our economy. but i do believe that cda 230 should be revised so that if something is illegal in real life, it is illegal to host it online. i don't think that is an unfair burden for tech firms, certainly not for the wealthiest firms in our country. i myself have a small business; we have to run checks to make sure that when we do business with foreigners, we are not doing business with somebody on a terror blacklist. is it so difficult for companies like google and reddit to make sure they're not hosting an illegal pharmacy? >> i think we get the gist of your answer. the chair now yields to the ranking member for five minutes. >> thank you, mister chairman, and thanks to our witnesses. if i could share with you: a recent new york times article chronicled the horrendous nature of online child sex abuse and how it has grown exponentially over the last decade. my understanding is that companies are only legally required to report images of child abuse when they discover them; they're not required to actively look for them. understanding that many make voluntary efforts to look for this type of content, how can we encourage platforms to better enforce their terms of service, or to proactively use the sword provided by subsection (c) of section 230, the good faith effort, to create accountability within platforms? >> thank you for the question, and particularly for focusing on the importance of section 230(c) in incentivizing platforms to moderate content. i can say that for google, we do think transparency is critically important, so we publish our guidelines. we publish our policies.
we publish on youtube a quarterly transparency report that shows, across the different categories of content, the volume of content we have been removing, and we allow for users to appeal, so if their content is stricken and they think that was a mistake, they have the ability to appeal and track what is happening. we understand that this transparency is critical to user trust and to the discussion with policymakers on these critically important topics. >> professor citron, a number of defendants have claimed section 230 immunity in the courts, some of which are platforms that may not host any user generated content at all. was section 230 intended to capture those platforms? platforms that are solely responsible for the content, where there is no user generated content and they are creating the content themselves; the question is whether they are covered by the legal shield. >> no, they would be responsible for the content that they created and developed, so section 230's legal shield would not apply. >> doctor farid, are there tools available, like photodna or content id, to flag the sale of illegal drugs online? if the idea is for platforms to be incentivized to actively scan their platforms and take down blatantly illegal content, shouldn't keywords or other indicators associated with opioids be searchable through an automated process? >> the short answer is yes. there are two ways of doing content moderation. once material has been identified, usually by a human moderator, whether it's child abuse material, illegal drugs, terrorism-related material, copyright infringement, whatever it is, the material can be fingerprinted and then stopped from future upload and distribution. that technology has been understood and deployed for over a decade.
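[editor's note] the first form of moderation doctor farid describes, fingerprint known bad material and block re-uploads, can be sketched in a few lines. this is a simplified illustration using an ordinary cryptographic hash; real systems such as photodna or content id use perceptual hashes that survive re-encoding and cropping, which sha-256 does not:

```python
import hashlib

# Simplified fingerprint-and-block moderation: hash known bad files once,
# then reject any future upload whose fingerprint matches the blocklist.
blocklist: set[str] = set()

def fingerprint(data: bytes) -> str:
    """Exact-match fingerprint of a file's bytes (real systems use perceptual hashes)."""
    return hashlib.sha256(data).hexdigest()

def flag_known_bad(data: bytes) -> None:
    """Called once a moderator identifies material that must not recirculate."""
    blocklist.add(fingerprint(data))

def allow_upload(data: bytes) -> bool:
    """Screen an incoming upload against previously flagged material."""
    return fingerprint(data) not in blocklist

flag_known_bad(b"previously identified illegal file")
print(allow_upload(b"previously identified illegal file"))  # False: exact re-upload blocked
print(allow_upload(b"some unrelated upload"))               # True
```

the design choice to match against a shared database of fingerprints is also what makes the cross-industry hash sharing mentioned later in the hearing possible: platforms exchange fingerprints, not the underlying material.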
i think it is deployed unevenly across the platforms and not nearly aggressively enough, but that is one form of content moderation that works today. the second form is what i call the day zero problem: finding, say, the christchurch video on upload. that is incredibly difficult and still requires law enforcement, journalists and the platforms themselves to find it, but once that content has been identified, it can be blocked from future upload. and i will point out, by the way, that today you can go onto google and type "buy fentanyl online" and it will show you on the first page an illegal pharmacy where you can click and purchase now. that is not a difficult find. we are not talking about the dark web, or things buried on page 20; it is on the first page, and there is no excuse for that. >> let me follow up, because it's an epidemic. what more might the platforms be doing out there? last year we passed a number of pieces of legislation addressing the drug crisis we have in this country, fentanyl being one focus, and you mentioned that you can find it if you type in fentanyl. what we're trying to do is make sure we don't repeat the 72,000 deaths that we had in this country a year ago, with over 43,000 associated with fentanyl. how do we go to the platforms and say, you have got to enforce this, because we don't want this coming in from china? how do we do this? >> this is what the conversation should be. i'm with everybody else: we don't repeal 230, but we make it a responsibility, not a right. if your platform can be weaponized in the way we have seen across the board, through the litany of things i listed in my opening remarks, clearly something is not working. i can find this on google on page 1, and not just me: my colleagues at this table, investigative journalists. we know this content is there.
we have to ask the question: if a reasonable person can find this content, surely google, with its resources, can find it as well. and then what is the responsibility? you said earlier that you enforce your terms of service, so if we don't want to talk about 230, let's talk about your terms of service. the terms of most of the major platforms are pretty good; it's just that they don't do much to enforce them in a clear, consistent and transparent way. >> mister chairman, my time has expired and i yield back. >> the chair recognizes ms. schakowsky, chair of the consumer protection subcommittee, for five minutes. >> ms. oyama, you said in one of the sentences you presented to us that without 230... i want to see if there are any hands that would go up saying that we should abandon 230. so this is not the issue. this is a sensible conversation about how to make it better. mister huffman, you said, and i want to thank you, we had i think a really productive meeting yesterday explaining to me what your organization does and how it's unique, but you also said in your testimony that section 230 is a uniquely american law, and yet when we talked yesterday you thought it was a good idea to put it into a trade agreement dealing with mexico and canada. if it's a uniquely american law, let me say i think trying to fit it into the regulatory structure of other countries at this time is inappropriate, and i would like to quote from a letter that both chairman pallone and ranking member walden wrote some time ago to mister lighthizer, which said: we find it inappropriate for the united states to export language mirroring section 230 while such serious policy discussions are ongoing. and that's what's happening right now: we're having a serious policy discussion. but i think what the chairman was trying to do, and what i want to do, is to figure out what we really want to amend or change in some way.
so again, briefly, to the three of you who have talked about the need for changes: let me start with professor citron. what do you want to see in 230? >> i'd like to bring the statute back to its original purpose, which was to apply to good samaritans engaged in responsible and reasonable content moderation practices. and we could change the language of the statute so that the immunity is conditioned: we would not treat a provider or user of an interactive computer service that engages in reasonable content moderation practices as the publisher or speaker, so platforms would have to earn the immunity. >> let me suggest that if there's language, i think we would like to see your suggestions. ms. peters, if you could, and i think you've pretty much scared us as to what is happening, tell us how we can make 230 responsive to those concerns. >> thank you for your question. we would love to share proposed language with you about how to reform 230 to protect against organized crime and terror activity on platforms. one of the things that concerns me about a lot of tech firms is that when they detect illicit activity, or it gets flagged to them by users, their response is to delete it and forget about it. what concerns me is that, number one, that essentially destroys critical evidence of a crime; it's actually helping criminals cover their tracks, as opposed to the situation we have in the financial industry and even aspects of the transport industry, where if they know illicit activity is going on, they have to share it with law enforcement, and they have to do it within a certain timeframe. i certainly want to see the content removed, but i don't want to see it simply deleted; i think that is an important distinction. i would like to see a world where the big tech firms work collaboratively with civil society and with law enforcement to root out some of these evils. >> forgive me for cutting you off, my time is running out and i do want to get to doctor farid with the same question.
i would welcome concrete suggestions. >> i agree with my colleagues. i think 230 should be a privilege, not a right: you have to show you are doing reasonable content moderation. we should be worried about the startups, though; if we start regulating now, the ecosystem could become even more monopolistic. we have to think about how we make carveouts for small platforms so they can compete, since today's giants did not have to deal with that regulatory pressure. and the last thing i will say is that the rules have to be clear, consistent and transparent. >> thank you, i yield back. >> the chair recognizes mrs. rodgers for five minutes. >> thank you, mister chairman. section 230 was intended to provide online platforms a shield from liability as well as a sword to make good faith efforts to filter, block or otherwise address certain content online. professor citron, do you believe companies are using this sword, and if not, why do you think that is? >> if you're asking about the dominant platforms, i've been working with facebook and twitter for about eight years, so i would say the dominant platforms, including both on this panel, are at this point engaging in what i would describe as, at a broad level, fairly reasonable content moderation practices. i think they should be far better on transparency: when they prohibit, say, hate speech, what do they mean by that? what is the harm that they want to avoid? they could be more transparent about the processes they use when they make decisions, and have more accountability. but what really worries me are the renegade sites as well, the 8chans and the like. and frankly, sometimes it's the biggest of providers, not the small ones, who know they have illegality happening on their platform and do nothing about it. why are they doing that? because section 230 grants them immunity. the dating app grindr comes to mind: it hosted impersonations of someone's ex, and that ex was using grindr to send thousands of men to his home.
despite repeated complaints from the individual being targeted, the company did nothing about it. when they responded, after getting a lawsuit, their response was: our technology doesn't allow us to track ip addresses. but grindr is fairly dominant in the space. when the person went to a smaller dating site, the impersonator was again posing as the individual, sending men to his home, and that site said: we can ban the ip address. so i think the notion that small versus large maps onto good practices, thoughtful practices versus irresponsible practices, is wrong. >> thank you for that. mister huffman and ms. oyama, do your company policies specifically prohibit illegal content on your platforms? and regarding your terms of service, how do you monitor content on your platform to ensure it does not violate your policies? >> in my opening statement i described three layers of moderation we have on reddit. first is our company moderation and our trust and safety team; this is the group that both writes the policies and enforces them. primarily they work by enforcing these policies at scale: looking for patterns of behavior, looking for known problematic sites, or worse. we participate in cross-industry hash sharing, which allows us to find images, for example of exploited children, that have been shared elsewhere, or fingerprints thereof. next are our community moderators, volunteer users, and those two groups participate together in removing content that is inappropriate for their community or in violation of our policies. among the core points of our policies are no illegal content and no regulated goods: no drugs, no guns, anything of that sort. >> so you're seeking it out, and if you find it, you get it off the platform. >> that's right. 230 doesn't shield us from federal criminal liability, and we are not in the business of helping people commit crimes; that would be problematic, so we do our best to make sure it's not on our platform.
>> ms. oyama, can you address that: what are you doing if you find illegal content? >> we have very clear content policies, and we publish those online. we have youtube videos that give more examples, specifically so people understand. of the 9 million videos that we removed from youtube in the last quarter, 87 percent were detected first by machine, so automation is one very important way. a second way is human reviewers: we have community flagging, where any user who sees problematic content can flag it and follow what happens with that complaint. we also have human reviewers on staff, and we are very transparent in explaining that. when it comes to criminal activity on the internet, 230 has a complete carveout. so in the case of grindr, we have policies against harassment, and my understanding is that in that case there is a criminal case for harassment and stalking proceeding. and in certain cases, opioids, again, being controlled substances under criminal law, there is a provision on the sale of controlled substances on the internet; in cases like that, where there is a law enforcement role, if there is proper legal process, we would work with law enforcement to provide information under due process. >> my time has expired. >> you're recognized for five minutes. >> i really want to thank this panel. i'm a former constitutional lawyer, so i'm always interested in the intersection between criminality and free speech, and in particular, professor citron, i was reading your testimony, where you concurred with ms. schakowsky about how section 230 should be revised to both continue to provide first amendment protections and also return the statute to its original purpose, which is to let companies act more responsibly, not less.
and in that vein, i want to talk during my line of questioning about online harassment, because this is a real issue that has only increased. it has been reported that 24 percent of women and 63 percent of lgbtq individuals experience online harassment because of their gender or sexual orientation, compared to only 14 percent of men, and 37 percent of all americans of any background have experienced severe online harassment, which includes sexual harassment, stalking and physical threats. so i want to ask you, professor, and also ms. peters briefly, to talk to me about how section 230 facilitates illegal activity. do you think it undermines the value of those laws, and if so, how? >> let me say that in cases involving harassment, of course, there is the perpetrator, and then the platform that enables it. most of the time the perpetrators are not pursued by law enforcement. in my book, i explore the fact that law enforcement really just doesn't understand the abuse; they don't know how to investigate it, and in the case of grindr, there were 10 protective orders that were violated and law enforcement did nothing about it. it is not true that we can always find the perpetrator, especially in cases of stalking, harassment and threats; we see severe under-enforcement of the law, particularly when it comes to gendered harms. >> so that's where it falls to the sites to try to protect people. ms. peters, do you want to comment on that? >> on this issue there could be something akin to a cyber restraining order, so that if somebody is stalking somebody on grindr or okcupid or google, the platform can be ordered to block that person from communicating with the other. >> and even under section 230 immunity, can platforms ignore requests to take down this type of material? >> they can. >> professor, you're nodding your head. >> yes, they can, especially if those protective orders come from state criminal law.
>> i wanted to ask you, doctor mcsherry: sexual harassment continues to be a significant problem on twitter and other social platforms, and i know section 230 is a critical tool that facilitates content moderation, but as we've heard in the testimony, a lot of the platforms aren't being aggressive enough in enforcing their terms and conditions. so i want to ask you: what can we do to encourage platforms to be more aggressive in protecting consumers and addressing issues like harassment? >> i imagine this hearing will encourage many of them. but you know... >> we keep having hearings. >> i understand, absolutely. i think many of the platforms are pretty aggressive already in their content moderation policies. i agree with what many have said here today, which is that it would be nice if they would start by really enforcing their actual terms of service, which we share a concern about, because often they are enforced very inconsistently, and that is very challenging for users. a concern that i have is that one proposal, under which whenever you get a notice you have some duty to investigate, could actually backfire on marginalized communities, because one of the things that also happens is that if you want to silence someone online, one thing you might do is flood the service provider with complaints about them, and then they end up being the ones who are silenced rather than the other way around. >> what's your view of what doctor mcsherry said? >> there are two issues at hand. with moderation, you risk over-moderating or under-moderating. i would argue we are way under-moderated. when you look at where we fall down and where we make mistakes on takedowns of content, and i have weighed in on some 5 million pieces of content just last year, from child abuse material to terrorism and drugs, i believe the balance is off. we have to try to get it right.
we are going to make mistakes, but we are making way more mistakes on allowed content than we are on not allowing it. >> thank you, i yield back. >> the chair recognizes mister johnson for five minutes. >> thank you, and to you and to chairwoman schakowsky for holding this very important hearing. i've been in information technology for most of my adult life, and social responsibility has been an issue that i have talked about a lot. the absence of heavy-handed government regulation, i think, is what has allowed the internet and social media platforms to grow like they have, but, and i hate to sound cliché, there's that old line from the jurassic park movie: sometimes we are more focused on what we can do, and we don't think about what we should do. i think that's where we find ourselves with some of this. we've heard from some of our witnesses how the access to a global audience that internet platforms provide is being used for illegal and illicit purposes, like terrorist organizations and even the sale of opioids, which continues to severely impact communities across our nation, particularly in rural areas like the one i live in in southeastern ohio. however, internet platforms provide an essential tool for legitimate communication and the free, safe and open exchange of ideas, which has become a vital component of modern society and today's economy. i appreciate hearing from all of our witnesses as our subcommittees examine whether section 230 of the communications decency act is empowering internet platforms to effectively self-regulate under this light-touch framework. so, mister huffman, in your testimony you discussed the ability of not only reddit employees but its users to self-regulate and remove content that goes against reddit's stated rules and standards.
have the other social media platforms, facebook or youtube, been able to successfully implement self-regulating functions? if not, what makes reddit unique in its ability to self-regulate? >> i am only familiar with other platforms to the extent that you probably are, which is to say i'm not an expert. i do know they are not sitting on their hands. i know they're making progress, but the reddit model is unique in the industry in that we believe the only thing that scales with users is users. so when we're talking about user-generated content, we share some of this burden with those people. in the same way that in our society here in the united states there are many unwritten rules about what is acceptable to say or not, the same thing exists on our platform, and by empowering our users and communities to enforce those unwritten rules, it creates a healthier ecosystem. >> miss oyama, in your testimony you discussed the responsibility of determining which content is allowed on your platforms, including balancing respect for diversity and having a platform for marginalized voices. would a system like reddit's upvotes and downvotes impact the visibility of diverse viewpoints on platforms like youtube, and how do likes and dislikes on youtube impact a video's visibility? >> thank you for the question. on youtube, users can thumbs up or thumbs down videos. that is one of many signals; it wouldn't be determinative in terms of a recommendation of a video, which would mostly be driven by relevance. and i really appreciate your point about responsible content moderation. i did want to make the point that, on the piece about harassment and bullying, we removed 35,000 videos from youtube in the last quarter, and we can do this because of cda 230.
whenever someone's content is removed, they may be upset, and there could be cases against the service provider for defamation or breach of contract. service providers large and small are able to have these policies and implement procedures to identify bad content and take it down because of the provisions of cda 230. >> i've got some other questions i want to submit for the record, but let me summarize this because i want to stay within my time, and you're going to require me to stay within my time. the absence of regulations, as i mentioned in my opening remarks, raises social responsibility to a much higher bar. and i would suggest to the entire industry, the internet and social media platforms: we had better get serious about this self-regulating, or you're going to force congress to do something that you might not want to have done. with that, i yield back. >> the chair recognizes miss matsui for five minutes. >> thank you, mister chairman, and i want to once again thank the witnesses for being here today. miss oyama and mister huffman, last week the senate intel committee released a bipartisan report on russia's use of social media. the report found that russia used social media platforms to sow social discord and influence the outcome of the 2016 elections. what role does section 230 play in ensuring that platforms are not used again to disrupt our political process? >> cda 230 is critically important for allowing services to protect citizens and users against foreign interference in elections, which is a critical issue, especially with the election cycle coming up. fortunately, due to the measures we mentioned today and ad removals, we found that across google's systems in the 2016 elections there were only two accounts that had infiltrated our systems, and they had spent less than $5,000 back in 2016. we continue to be extremely vigilant, and we publish a political ads transparency report. we require that ads disclose who paid for them.
they show up in the ads library, they need to be -- >> so you feel you're doing enough? >> we can always do more, but on this issue we are focused and working hard. >> mister huffman. >> so in 2016, we found that we saw the same news and misinformation submitted to our platform as we saw on the others. the difference is that on reddit it was largely rejected by the users before it came to our attention. if there's one thing reddit is good at, it's being skeptical, rejecting dubious sources and questioning everything, for better or worse. between then and now we have become dramatically better at finding groups of accounts that are working in a coordinated or inauthentic manner, and we coordinate with law enforcement. we carry everything we learned in the past going forward, so i think we're in a good position going into the 2020 election. >> doctor farid, in your testimony you mentioned disinformation campaigns designed to disrupt democratic elections. this troubles me and a lot of other people. you mentioned there's more the platforms should be doing about moderating content online. what more should they be doing about this issue, now, at this time? >> let me give you one example. a few months ago we saw the video of speaker pelosi make the rounds, and the response was interesting. facebook said, we know it's fake, but we're leaving it up; we are not in the business of telling the truth. that was not a technological problem, that was a policy problem. it was not satire, not comedy. it was meant to discredit the speaker. so i think fundamentally we have to relook at the rules, and if you look at facebook's rules, it says you cannot post things that are misleading or fraudulent. that was a clear case where the technology worked and the policy was unambiguous, and they failed on the policy. youtube took it down, and twitter didn't even respond to the issue. so in some cases there is a technological issue, but more often than not we are simply not enforcing the rules that are in place.
>> so that was a decision they made. >> so mister huffman, what do you think about what doctor farid just said? >> sure. there are two aspects of this. first, specifically, on reddit we have a policy against impersonation. a video like that can be used to manipulate people; it spreads misinformation, it raises questions about the veracity of the things that we see and hear, and it compromises important discussions. so the context around whether a video like that is up or down on reddit is important, and those are difficult decisions. i will observe that we are entering into a new era where we can manipulate videos. it has historically been possible to manipulate images with photoshop, and now video. so i do think not only do the platforms have a responsibility, but we as a society have to understand the source of materials, and which publications are credible is critically important, because there will come a time, no matter what my peers say, where we will not be able to detect that sort of fake at all. >> on the specific piece of content you mentioned, we do have a policy against it, but there is ongoing work that needs to be done to better identify deepfakes. even comedians sometimes use them, but in political contexts or other places they could undermine democracy, and we have opened up a dataset and are working with researchers to build technology that can better detect when media is manipulated. >> i appreciate the comments. i have a lot more to say, but you know how this is. i yield back. >> the chair recognizes mister kinzinger for five minutes. >> thank you all for being here. we appreciate it. it's interesting, on that last line of questions: one of the best things about democracy is our ability to have free speech, but this can also be something that is a real threat.
>> i thank the chairman for yielding, and i think it's safe to say no member of congress has a plan for what to do about section 230 of the communications decency act, but we all agree that the hearing is warranted. we need to have a discussion about the origins and intent of that section and whether the companies that enjoy these liability protections are operating in the manner intended. i'll state up front that i generally appreciate the efforts certain platforms have made over the years to remove or block unlawful content. but i'd also say it's clearly not enough and that the status quo is unacceptable. it's been frustrating for me in recent years that my image and variations of my name have been used by criminals to defraud people on social media, and this goes back 10 years. on one note, these scams are increasingly persuasive, and i not only brought it up in a hearing with mark zuckerberg last year, i wrote him again this summer to continue to press him to more boldly protect his users. so i have a question. sources indicate that in 2018, people reported hundreds of millions of dollars lost to online scammers, with $143 million lost through romance scams. given what so many people have gone through, it has become more and more important for platforms to verify user authenticity. so, both mister huffman and miss oyama, what do your platforms do to verify the authenticity of user accounts? >> thank you for the question. again, there are two parts to my answer. the first is on the scams themselves. my understanding is you're probably referring to scams that target veterans in particular. we have a number of veterans communities on reddit.
like all of our communities, they create their own rules, and these communities have created rules that prohibit fundraising in general, because the members of those communities know they can be targeted by this sort of scam in particular. that's the sort of nuance we think is important, and it highlights the power of our community model, because i, as a nonveteran, might not have had that same sort of intuition. in terms of what we know about our users, we're different from our peers in that we don't require people to share their real-world identity. we do know where they register from, what ip they use, maybe their email address, but we don't force them to reveal their full name or their gender, and this is important because on reddit there are communities that discuss sensitive topics: those various veteran communities, for example, drug addiction communities, communities for parents struggling with being a parent. these are not things that people would go on to a platform like facebook and say: i don't like my kid. >> i don't mean to cut you off, but i want to go to miss oyama. >> sorry to hand that topic to you. >> on youtube we have a policy against impersonation, so if you were ever to see a channel impersonating you, there's a form where you can report it and upload a government id, and that would result in the channel being removed. on search, scams can show up across web searches, and we're trying to give relevant information to our users every day on search, across 19 billion links that could be scams. and there's something called the risk engine that can kick out fraudulent accounts before they enter. >> i'm not upset about the sites that are like: kinzinger, the worst congressman ever.
that's understandable, i guess, for some people. but then you have, in my case, somebody who, as an example, flew from india because she thought we had been dating for a year, not to mention all the money she gave to this perpetrator, and all these other stories. i think one of the biggest and most important things is that people need to be aware of this: if you have somebody over a period of years getting to know you and never authenticating who they are, it's probably not real. what are the risks associated with people not being able to trust other users? >> i think there are multiple risks there, but i want to come back to the key issue for us, which is that the sites should be required to hand over data to law enforcement and to work collectively with law enforcement. we've heard a lot today from the gentleman from reddit about their efforts to better moderate. some of our members were able to go online the other day, type in a search for "buy fentanyl online," and it came up with many results. "buy adderall online, without prescription." these are fairly simple search terms, and i'm not talking about a high bar: getting rid of that on your platform doesn't seem too hard, or having that search automatically redirect to a site that would advise you to get counseling for drug abuse. we're not trying to be the thought police. we're trying to protect people from organized crime and terror activity. >> i'll yield back. i have a bunch more questions. >> for the record, i want to say i don't think the gentleman is the worst member of congress. >> i don't even think you're at the very bottom. you're not a bad guy. the chair recognizes miss castor for five minutes.
>> thank you, chairman doyle, for organizing this hearing, and thanks to all of our witnesses for being here today. i'd like to talk about the issue of 230 in the context of a horrendous tragedy in wisconsin a few years ago involving armslist.com, where a man walked into a salon where his wife was working, shot her dead in front of their daughter, killed two others in that salon, and then killed himself. this is the type of horrific tragedy that is all too common in america today. i think you said that it was all legal, but it wasn't, because two days before the shooting there was a temporary restraining order issued against that man. he went online shopping on armslist.com two days after that tro was issued, and the next day he commenced his murder spree. and what happened is, armslist knows that they have domestic abusers shopping. they've got felons, they've got terrorists shopping for firearms, and yet they're allowed to proceed. earlier this year the wisconsin supreme court ruled that armslist is immune, even though they know that they are propagating illegal content and these kinds of tragedies. the wisconsin supreme court ruled armslist is immune because of section 230. they basically said it did not matter that armslist actually knew, or even intended, that its website would facilitate illegal firearms sales to dangerous persons; section 230 still granted immunity. and ms. peters, this is not an isolated incident. we're talking about child sexual abuse, illegal drug sales. it has gone way too far. so i appreciate that you all have proposed some solutions. doctor citron, you highlighted a safe harbor: if companies use their best efforts to moderate content, they would have some protection. but how would this work in reality? would this be left up to the courts in those types of liability lawsuits, or is there a need for clear standards out of congress? >> yes, it would, and thank you so much for the question.
and how would we do this? it would be in the courts. on an initial motion to dismiss, the question for whichever company is being sued would be: are your content moderation practices reasonable, writ large? not with regard to any one piece of content or activity. and it's true that it would then be a forcing mechanism in federal court to have companies explain what constitutes reasonableness. we actually could come up, all of us, with some basic threshold of what we think are reasonable content moderation practices, what we might describe as technological due process: having a process, and clarity about what it is you prohibit. but it's going to have to be case by case, context by context, because what's a reasonable response to a deepfake (and i have spent a considerable amount of time working on them) is going to be different from the kind of advice i would give facebook, twitter and others about what constitutes a threat and how someone figures that out. i'm thinking about doctor farid's testimony about what we do about certain -- >> wouldn't it be in the public interest that if this is explicitly illegal content, it wouldn't wind up as an issue of fact in a lawsuit? what do you think? if it's illegal content online, there really shouldn't be a debatable question. >> i'm a mathematician by training, so i don't know if you want to be asking me the question, but i agree with you. in some cases, what we've seen over the years, and we saw this when we were deploying photodna, is that the technology companies want to get you muddled up in the gray area. we had conversations when we were trying to remove child-abuse material: what happens when it's an 18-year-old? what happens when it's not sexually explicit? and those are complicated questions, but there's bad behavior: people doing awful things to kids as young as two months old.
>> my time is short, but there's also an issue with the number of moderators being hired to go through this content. the publication called the verge had a horrendous story about facebook moderators, and it caught my attention because one of the places is in tampa, florida, my district. i'm going to submit follow-up questions about the number of moderators and some standards for that practice, and i encourage you to answer and send it back. >> the gentlelady yields. now the chair recognizes the gentleman from illinois for five minutes. >> it's great to be with you. i'm sorry i've missed a lot of this because i've been upstairs, but in my 23 years of being a member, i've never had the chance to address the same question to two different panels on the same day, so it's kind of an interesting convergence. upstairs we were talking about e-cigarettes, underage use, and what's in the products. so i was curious: when we were in opening statements here, someone, and i apologize, i don't recall who, mentioned two cases. in one, the company was dismissed because they really did nothing, and in the other, the one who tried to be a good actor got slammed. i don't know about slammed, but i see a couple of heads nodding. first, ms. citron, can you address that? >> those are the cases that gave rise to section 230. that's what animated chris cox to go to ron wyden and say, we've got to do something about this. there were a pair of decisions in which, if you did nothing, you weren't going to be punished for it, but if you tried and you moderated, that heightened your responsibility. >> so no good deed goes unpunished. >> that's why we're all in agreement; that's why we're here today, in many respects. >> i'll tie this to what's going on upstairs: if someone uses a platform to encourage underage vaping with unknown nicotine content and the site decides to clean it up, under the way the law was written back then, this good deed, which most of us would agree is probably a good deed, would be punished. >> and now we have section 230, which is why we have section 230.
they are encouraged in what they're doing in good faith under section 230(c)(2). they can remove it and they are good samaritans. >> so that is the benefit of it. okay. in the debate that we heard earlier, in opening comments from some of my colleagues about the usmca: part of that debate is that removing the protections of 230 would mean we fall back to a regime in which the good-deed person could get punished. is that correct? everybody's kind of shaking their heads, mostly. mrs. peters, you're not. go ahead. >> just turn your mic on. >> we need to keep the 230 language out of the trade agreements. it's currently at issue here in the united states, and it's not fair to put it in a trade agreement in a way that will make it impossible, or harder, to change. >> i want usmca passed as soon as possible without getting encumbered to the point where it doesn't happen. i'm not a proponent of trying to delay this process; i'm just trying to work through the debate. my main concern upstairs is that we believe in a legal process for products that have been approved by the fda, and we are concerned about black-market operations that would then use platforms illicitly to sell to underage kids. that would be how i would tie these two hearings together, which again, i think is pretty interesting. when we had a facebook hearing a couple of years ago, i referred to a book called "the future computed," which talks about the ability of industry to set standards. i do think that industry, and we do this across the board, whether it's the engineering of heating, air and cooling equipment, really can come together for the good of the whole and say, here are our standards. and the fear is that if this sector doesn't do that, then the heavy-handed government will do it, which i think would really cause more problems. you're shaking your head. >> we've been saying to the industry: you have to do better, because if you don't, somebody's going to do it for you.
so you do it on your own terms or somebody else's. >> i agree. there are experts out there with thoughts about fairness, reliability, honesty, transparency and accountability. i would encourage the industry, and those who are listening, to help us move in that direction on their own before we do it for them. with that, mister chairman, i yield back my time. >> the gentleman yields, and the chair recognizes himself for five minutes. this has been very interesting testimony, and jarring in some ways. mrs. peters, your testimony was particularly jarring. have you seen any authentic offers of weapons for sale online? >> i have not, but we certainly have members of our alliance tracking weapons activity, and i think what's more concerning to me, in a way, is the number of illegal groups, from designated hezbollah groups to al qaeda, that maintain webpages, link their twitter and facebook pages from them, and then run fundraising campaigns off of them. there are many platforms that allow for secret and private groups, and those groups are the epicenter of illicit activity, so it's hard for us to get inside them. we run undercover operations to get inside some of them. >> doctor farid, in your testimony you talked about the tension in tech companies' incentives: the amount of time users spend on their platforms on the one hand, and moderation on the other. can you talk about that briefly? >> we've been talking about 230, and that's an important point, but there's another thing, which is the underlying business model of silicon valley: they're not selling you a product, you are the product. in some ways that is where a lot of the tension is coming from, because the metrics these companies use for success are how many users they have and how long those users stay on the platforms. you are fed more and more content down the rabbit hole, so there is real tension there, and the bottom line is that it's not just ideological; we are talking about real problems. >> ms. oyama, would you like to add to that?
>> on any of these issues we are discussing today, whether it's harassment or extremism, it is important to remember the positive and productive potential of the internet. there's the "it gets better" project; we've seen counter-messaging. we have a program called creators for change, where youth are able to create really compelling content to counter extremist messages. and i think it's just good to remember that section 230 was born out of this committee and long-standing policy. it is relevant to foreign policy as well: the internet is responsible for the $172 billion trade surplus the united states has, and it's critically important for all businesses to be able to moderate content, and to prevent censorship by other, more oppressive regimes. >> it's a great issue, and it's hard to restrain yourself, and i understand that, but clearly companies could be doing more today, within the current legal framework, to address problematic content. i'd like to ask each of you very briefly what you think can be done today, with today's tools, to moderate content. very briefly, please. >> for us, the biggest challenge is evolving our policies to meet new challenges, and we will continue to do so into the future. for example, two recent ones for us were expanding our harassment policy and banning deepfake pornography. undoubtedly there will be new challenges in the future, things we weren't even aware of two years ago, so being able to stay nimble and address them is really important. 230 gives us the space to adapt to these sorts of challenges. >> ms. citron. >> setting a reasonableness standard ensures that we respond to a changing landscape. we can't have a checklist right now, but i would encourage companies to not only have policies but to be clear about them and to be accountable. >> dr. mcsherry. >> just quickly, the issue for me is the reasonableness standard. as a litigator, that's terrifying. that means, especially for small businesses, a lot of litigation as courts try to figure out what is reasonable.
to your question, one of the crucial things i think we need, if we want better moderation practices and we want users not to be treated just as products, is to incentivize alternative business models. we need to make sure we clear a space so there is competition, so that when people are dissatisfied they can go to other places with other practices, and other sites are encouraged to develop and evolve. market forces sometimes can work, and we need to let them work. >> i'm going to yield to the gentlelady from indiana for five minutes. >> thank you, mr. chairman, and thank you so much for this very important hearing. dr. farid, to set the record: the reason i'm asking these questions is that i'm a former u.s. attorney and was very involved in the internet crimes against children task force. we did a lot of work from 2001 to 2007. you are right that "revenge pornography" was not a term at that time, so we certainly know that law enforcement has been challenged for decades now in dealing with child pornography over the internet. and yet i believe that we have to continue to do more to protect children, and to protect kids all around the globe. a tool, photodna, was developed a long time ago to detect criminal online child pornography, yet it means nothing to detect that illegal activity if the platforms don't do anything about it. we have been dealing with this now for decades. this is not new, and yet we now have new tools beyond photodna. so is it a matter of tools, or of effort? how is it that it's still happening? dr. farid. >> first of all, i was part of the team that developed photodna with microsoft, and for an industry that prides itself on rapid and aggressive development, there has been no tool in the last decade that has gone beyond photodna. it's truly pathetic that we are still talking about this.
how does an industry with so much innovation say, we are going to use 10-year-old technology to combat some of the most gut-wrenching, heartbreaking behavior online? it's completely inexcusable. it is not a technological limitation; we are simply not putting the effort into developing and deploying the tools. >> and let me just share, having watched some of these videos, it is something you never want to see, and you cannot get it out of your mind. >> i agree. >> so i'm curious if you wanted to respond: how is it that we are still at this place? >> thank you for the question. i will say that for google that is not true at all. we have never stopped working. we can always do better, but we are constantly adopting new technologies. we initiated one of the first tools, which enabled us to create digital fingerprints of this imagery and prevent it from being reuploaded to youtube, and there's a newer tool that we are sharing with ngos. it resulted in a seven-times increase in detection. this is going to continue to be a priority, but i just wanted to be clear that, from the top of our company, we need to be a safe and secure place for parents and children, and we will not stop working on this issue. >> i'm very pleased to hear that there has been progress and that you are sharing those tools; that's important. however, i will say that indiana police captain chuck cohen, who has testified before energy and commerce, recently told me that one of the issues law enforcement runs into when working with internet companies is an attitude he calls minimally compliant. he says internet companies will frequently not preserve content that can be used for a law enforcement investigation, and will flag concerning materials without checking whether the content is truly objectionable or not. do any of you have thoughts specifically on his comments? he is an expert. do any of you have thoughts on how we meet this critical law enforcement need, because they are saving children all around the globe,
without restricting concerning content? >> i just feel that if companies started getting fined, or paying some sort of punitive damages, every time there's illicit content, we would see a lot less horrific content very quickly. it should be illegal to post it online, and it's a very simple approach that i think we could apply. >> i have a question, particularly because i asked mark zuckerberg about this relative to terrorism and recruitment by isis, and we still need to be concerned about isis. i understand that you have teams of people that take content down. how many people are on your team, mr. huffman? >> dedicated to removing content, about 20% of our company. >> about 100 people. ms. oyama, how many people? >> more than 10,000 people. >> that actually remove content? >> involved in content moderation. >> how many are on the team that actually does that work? >> i'd be happy to get back to you. >> thank you. i yield back. >> the gentlelady yields back. at this point i would like to introduce a letter for the record. without objection, so ordered. >> the chair recognizes the gentlewoman from new york, ms. clarke, for five minutes. >> i thank our chairman and chairwoman and ranking members for convening this joint subcommittee hearing today on fostering a healthier internet to protect consumers. i introduced the first house bill on deepfakes, the deepfakes accountability act, which would regulate deepfake video. deepfakes can be used to impersonate candidates, to create revenge porn, and to undermine the notion of what is real. mr. huffman and ms. oyama, what are the implications of section 230 for your deepfakes policies? >> thank you for the question. we released, actually, i think before most of our peers and around the same time as others, a policy prohibiting deepfake pornography on reddit, as we saw this new emerging threat and wanted to get ahead of it as quickly as possible. the challenge we face, of course, is the one you raise, which is the increasing challenge of being able to detect what is real or not.
this is where we believe reddit's model actually shines. by empowering our users and communities to adjudicate every piece of content, they often highlight things that are suspicious, not just videos and images. i do believe very strongly that we as a society, not just the platforms, have to develop defenses against this sort of manipulation. >> ms. oyama. >> on youtube our overall policy is a policy against deceptive practices, and there are instances when we have seen these deepfakes. the speaker pelosi video is one example: we identified it as a deepfake and it was removed from the platform. accurate, authoritative information is in our long-term incentive. i would agree with what mr. huffman said; one of the things we are doing is investing deeply on the academic side and the research side, opening up datasets where we know these are deepfakes, to get better at being able to identify manipulated media. we have a revenge porn policy where the users are notified, and we did expand that to deepfakes in that area as well. >> can you discuss the implications of section 230 on deepfake removal? >> section 230 enables the activities that youtube and reddit engage in: proactive activities in the space of clear illegality, moving quickly. but the real problem isn't these companies. there is research showing that eight out of 10 of the biggest deepfake sites host deepfake sex videos, and there are sites whose business model is basically deepfake videos. >> does the current immunity structure reflect the unique nature of this threat? >> i don't think section 230 as it is devised does. at its best, it has incentivized the nimbleness we have seen from some dominant platforms, but that's not the way the language is written under 230(c)(1). we should condition immunity on being responsible and reasonable, because you have these outliers that cause enormous harm: a deepfake sex video is findable, and people can find it, and it's terrifying for victims. there are outlier companies whose business model is abuse.
They gleefully say, "Too bad, so sad," and that's the problem. >> Very well. One of the many issues that has become an existential threat is the rise of hate speech on social media platforms. Ms. Oyama, if 230 were removed, would platforms be liable, and would it change their incentives around moderating speech? >> Thank you for the question. I think this is an important area to show the power of 230. As you know, there are First Amendment restrictions on government regulation of speech, so there is additional responsibility for service providers like us to go above and beyond. We have a policy against deepfakes, and incitement to violence is prohibited. Speech targeting hate at specific groups based on race, religion, or age is prohibited, and the takedowns that we do through automated flagging, machine learning, or human reviewers are lawful and possible because of 230. When we take down content, someone's content is being taken down, so they can come back and may sue us for defamation or other things. I think looking at the small business interest is very important as well. They would say they are even more deeply reliant on 230 to innovate and to identify that content without fear of unmitigated litigation. >> Very well. Thank you very much, and I yield back, Madam Chairwoman. >> The gentlelady yields back, and now, Mr. Walberg, you're recognized for five minutes. >> I thank the chairwoman, and I appreciate the panel being here. Today's hearing and the issues at hand hit home for a lot of us, as we have discussed here. The internet is such an amazing, amazing tool. It has brought about great innovation and connected billions of people in ways that we would never have thought of before. That's true of the way we look forward to what we will see in the future, but these are issues we have to wrestle with.
Earlier this year I was pleased to invite Haley Petrosky from my district to the State of the Union as my guest, to highlight the good work she is doing in my district and surrounding areas to help combat cyberbullying. She is a very impressive young person who understands so much about what's going on, and she is having a real impact in high schools and colleges now as a result of her experience, attempting to make something positive out of it after she almost committed suicide, thankfully unsuccessfully, as a result of cyberbullying. She shined a light on that. So, Mr. Huffman and Ms. Oyama, what are your companies doing to address cyberbullying on your platforms? >> Thanks for the question, Congressman. Two weeks ago we updated our policies around harassment. It's one of the, I think, most complex or nuanced challenges we face, because it appears in many ways. One of the big changes we have made is to allow reports not just from the victim but from third parties, so when someone else sees the harassment, they can report it to our team and we can investigate. This is a nationwide issue, and ours is the sort of platform people come to in times of need. For example, a teenager struggling with their own sexuality and no place to turn, maybe not their friends or their family, may come to a platform like ours to talk to others in difficult situations, and people who are having suicidal thoughts come to our platform. Their safety is our first priority, regardless of the law. We fully support lawmakers in this initiative to make sure that those people have safe experiences on Reddit. We have made a number of changes, and we will continue to do so. >> Ms. Oyama. >> Thank you for the question. Harassment and cyberbullying are prohibited, so we use our policies to help us move forward, with automated detection, human flagging, and community flagging to identify that content and take it down.
Last quarter we removed 35,000 videos under the policy against harassment and bullying, and I want to echo Mr. Huffman's perspective that the internet and content sharing is a really valuable place. It can serve as a lifeline to a victim of harassment, and we see that all the time. Someone may be isolated, and being able to reach out across borders, to another state, or to find another community has really created a lot of hope, and we want to continue to invest in mental health resources. >> I'm glad to hear you are both willing to continue investing in helping us as we move forward in this area. Ms. Oyama, Google's ad network has come a long way in the last few years and won't serve ads associated with illegal activity. This is laudable and demonstrates Google has come a long way in identifying illegal activity. Given that Google is able to identify such activity, why would it not just take the content down as well? That question is for Ms. Oyama. >> It is true that in our ads system we have a risk engine, so we prohibit illegal content. Many ads are kicked out of the ad network for violating those policies. >> So you are taking them down. >> Absolutely, before they can even serve. It's very squarely in line with our business interests. We want people to know that our networks are safe. Our advertisers only want to be serving against good content. >> I understand that Google offers a feature to put a tag on copyrighted content that would automatically take down pirated uploads, and that Google charges a fee for this. Could this technology be applied to other illegal content, and why doesn't Google offer this tool for free? >> Thank you for the question. I think there may be a misperception, because we do have Content ID, which is our copyright system. We have partners across industry, including film. It is part of our partner program and is offered for free; it doesn't cost the partners anything, and they are revenue generators. Last year we paid out $3 billion through Content ID to rightsholders who claimed their copyrighted material.
They were able to take the revenue associated with that content. That system, being able to identify and detect content algorithmically and then set controls on whether it should be monetized and served, as in the entertainment space, or absolutely blocked, as in the case of violent extremism, is something that we do. >> I yield back. >> The gentleman yields back. >> Thank you, Madam Chair. I want to thank Chairwoman Schakowsky and the ranking members of the two subcommittees for holding this hearing today, and I want to thank the witnesses for your attendance as well. It's been very informative, even if we are not able to answer all the questions. It's not the first time our committee has examined how social media and the internet can be a source of innovation and human connection, which we all enjoy when we are making those connections, so long as they are positive, but also a vector of criminality. I think everyone assembled here today is clearly expert in your field, and I appreciate hearing from you all today as we consider how Section 230 has been interpreted and what, if any, changes we should be considering. I think there's a lot to consider as we address the full scope of what Section 230 covers, from cyberbullying or hate speech on Facebook, YouTube, or elsewhere, to the illicit transaction of harmful substances or weapons. I think the question today is twofold. First we must ask if content moderators are doing enough, and second we must ask whether congressional action is required to fix these challenges. The second one has been referred to obliquely throughout by some of us, and I think that's essentially the second question we are really facing today. After reviewing the testimony you submitted, we clearly have some differences of opinion on where Section 230 should be focusing its resources. To begin, I'd like to ask everyone the same question. This is probably the easiest question to ask and the most difficult to answer, because it's exceedingly vague.
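The fingerprint-and-match workflow Ms. Oyama describes, identify content algorithmically and then apply a per-rightsholder control, can be illustrated with a small sketch. This is not Google's implementation: real systems such as Content ID use perceptual audio/video fingerprints, while the `fingerprint` helper below stands in with a cryptographic hash purely to keep the example self-contained, and the policy names and sample data are hypothetical.

```python
import hashlib

def fingerprint(media_bytes: bytes) -> str:
    """Stand-in for a perceptual fingerprint of an uploaded clip.
    (A real system would use a robust audio/video fingerprint, not SHA-256.)"""
    return hashlib.sha256(media_bytes).hexdigest()

# Reference database: fingerprint -> the control set for that content,
# e.g. "monetize" chosen by a rights holder, "block" set by the platform
# for violent extremism. All entries here are illustrative.
reference_db = {
    fingerprint(b"studio-film-clip"): "monetize",
    fingerprint(b"extremist-video"): "block",
}

def moderate_upload(media_bytes: bytes) -> str:
    """Return the action for an upload: the stored policy on a fingerprint
    match, otherwise allow the upload through."""
    return reference_db.get(fingerprint(media_bytes), "allow")
```

A re-upload of known content matches its stored fingerprint and inherits the stored policy, which is why, as described in the testimony, a previously identified extremist video can be blocked before it serves.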
What does the difference between good and bad content moderation look like, starting with you, Mr. Huffman? >> Thank you, Congressman, for that philosophically impossible question. But I think there are a couple of easy answers that I hope everyone will agree with. Bad content moderation is no moderation at all, and that was the situation we were in before 230; there were perverse incentives we were facing. There are good kinds of content moderation. What is important to us at Reddit is twofold. One is empowering our users and communities to set the standards of discourse in their communities amongst themselves, because they are the only ones truly skilled at it. The second is what 230 provides us, which is the ability to look at our platform, to investigate, and to use some finesse and nuance when we are addressing these challenges. >> What makes content bad, or is it the moderation? >> What's the difference between good and bad content moderation? That's what we are talking about. >> Of course, but it precedes the question of why we are here. Why should we even talk about changing Section 230? What is troubling is when sites are permitted to have an entire business model which is the abuse of 230. That is the worst of the worst, and it solicits illegality and harm. >> That's the problem. >> I've got some answers for you. >> You can submit them to us in writing. >> I did, in my testimony. >> Thank you for the question. I think good content moderation is transparent and careful. What we see far too often is that, in the name of content moderation and making sure the internet is safe for everybody, all kinds of valuable and lawful content is taken offline. I'd just point to one example, where we have an archive of videos. Those videos are often flagged as violating the terms of service because, of course, they contain horrible material. But the point is to support political conversations, and apparently it's very difficult for the reviewer to tell the difference. >> Thank you. Ms. Peters.
>> If it's illegal in real life, it ought to be illegal online. I think there has been little investment in technology that would improve this for the platforms, precisely because of Section 230 immunity. >> I do realize I'm out of time. I'm sorry to have asked such a broad question, but I would like to get the responses of the other two witnesses in writing, if I could, please. Thank you so much, and I yield back. >> The gentleman yields back, and now I recognize Mr. Carter for five minutes. >> Thank you, Madam Chair, and thank all of you for being here. I know that you all understand how important this is, and I hope and believe you all take it seriously. Thank you for being here and thank you for participating. Ms. Peters, I'm going to start with you. I'd like to ask you: in your testimony you pointed out there is clearly quite a bit of illegal conduct that the online platforms are still hosting, illegal pharmacies where you can buy pills without a prescription, terrorists profiteering from all sorts of looted artifacts, and also products from endangered species. And it even gets worse. You mentioned the sale of human remains and child exploitation, gross things, if you will. How much effort do you feel the platforms are putting into containing this and stopping this? >> It depends on the platform, but that's a very good question. I'd like to respond with a question to you and the committee: when was the last time anybody here saw genitalia on Facebook? Simple question. If they can keep genitalia off of these platforms, they can keep child sexual abuse off of these platforms. They have the technology; these are policy issues, where there's a policy meant to allow a video of Nancy Pelosi and a policy not to allow pictures of human genitalia. >> I understand. Do you ever go and meet with them and express this to them? >> Absolutely. >> How are you received?
>> The firms have quite intelligent people working on AI, and they're creating AI. When we have presented evidence identifying crime and terror networks, we have been told they will get back to us, and they don't. That has happened multiple times. >> Are you ever told they don't want to meet with you? >> We have usually had meetings or calls. >> Do you feel like you have a good relationship, and is the effort being put forth? >> I don't feel the effort is being put forth. >> That's where I struggle. I'm doing my best to keep the federal government out of this. I don't want to stifle innovation, and I'm really concerned about that, but at the same time, look, we cannot allow this to go on. This is a responsibility. If you don't do it, you will force us to do it for you, and I don't want that to happen. It's just as clear as that. Let me ask you, Ms. Peters: you mention in your testimony that you were getting funding from the State Department to study supply chains, and that's when you discovered there was a large retail market for endangered species on some platforms like Facebook. Have any of those platforms made a commitment to stop this, and if they have, is it working? Is it getting any better? >> That's a terrific example to bring up. There are a number of tech firms that have joined a coalition with the World Wildlife Fund and have taken a pledge to remove endangered species content and wildlife markets from their platforms by 2020. I'm not aware that anything has changed. We have researchers going online and logging wildlife markets all the time. >> I'm going to be fair, and I'm going to let, I'm sorry, I can't see that far, I'm going to let you respond to that. Are you feeling like you are doing everything you can? >> We can always do more. We are committed to always doing more. >> I appreciate that, and I know that. I don't need you to tell me that. What I need you to tell me is: we have a plan in place; let me tell you what we are doing.
>> For wildlife and the sale of endangered species, we are part of the coalition that Ms. Peters mentioned. On the national epidemic you mentioned, opioids, we are hugely committed to playing our part. There's an online and an offline component. On the online component, research has shown that 0.5% of opioid misuse originates on the internet. What we have done, especially with Google Search, is work with the FDA. The FDA can send us a warning letter if they see there's a search result for a rogue pharmacy, and we will remove it from search. There's an important offline component too: we work with the DEA on drug take-back efforts and feature collection locations on Google Maps. I'm happy to come in and discuss further. >> I invite you to do just that. I would like to see you and talk further about this. Mr. Huffman, I'm going to give you the opportunity, because my staff has gone on Reddit and they have searched a few well-known terms for illegal drugs. I suspect you are going to tell me the same thing: we are working on it, we have almost got it under control, but it's still coming up. >> You're going to get a completely different answer, if you'll indulge me. First of all, it is against our rules to have controlled goods on our platform, and it's also illegal; 230 doesn't give us protection against criminal liability. We do see content like that on our platform. In fact, if you go to any technology service with a search bar, including your own e-mail, and search for Adderall, I'm sure you'd find a hit. That's the case on Reddit as well. That sort of content, some of it spam, is removed by our filters, but there is a lag between something being submitted and being removed. That said, we do take this issue very seriously. Our technologies continue to improve along these lines, and that's exactly the sort of ability that 230 gives us: the ability to look for this content and remove it. To the extent that you or your staff have found this content, and to the extent that it is still on our platform, we'd be happy to follow up later.
>> My sons are grown now, but I feel like a parent pleading with their child: please don't make me have to do this. Thank you, Madam Chair. I yield back. >> The gentleman yields back, and now I recognize Congresswoman Kelly for five minutes. >> Thank you, Madam Chair, and thank you for holding this important hearing on Section 230 and fostering a healthier, consumer-friendly internet. The intended purpose of Section 230 was to allow companies to moderate content through its Good Samaritan provision, and yet this law seems to be widely misapplied. The Good Samaritan provision of 230 protects actions taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, excessively violent, harassing, or otherwise objectionable, whether or not that material is constitutionally protected. Last Congress, Section 230 was amended to make platforms liable for activity related to sex trafficking. In the past, some have criticized the law for being too ambiguous. In addition, I chair the Tech Accountability Caucus, and in that capacity I have worked with stakeholders to hold platforms accountable while allowing innovators to innovate. It is my hope we will set the standard for doing so in a responsible, effective, and balanced way. Ms. Citron, you discussed conditioning immunity on whether content moderation practices writ large are reasonable. As the chairman referenced, how should companies know where the line is, or whether they are doing enough, and where is that line? >> It's the genius of reasonableness that it varies by context. There are certainly baseline assumptions about what would constitute reasonable practices, and that includes having practices at all: you absolutely don't get immunity if you don't engage in moderation and you encourage illegality. There are academic writings from the last 10 years, and there is a baseline set of policies that we have seen emerge as best practices, but what is reasonable is going to change depending on the challenge.
We are going to have different approaches to different new and evolving challenges, and that's why reasonableness preserves the liability shield, but in exchange for those efforts. >> Would you agree that any change we make, we have to ensure it doesn't create further ambiguity? >> Right, and if I may, what was disappointing, as someone who helped offices work on the language: when the drafters wrote language about knowingly facilitating, that creates the moderator's dilemma, to either sit on your hands or be really aggressive. My biggest disappointment was how it came out. It takes us back to those initial cases: we have either seen overly aggressive responses to sexual expression online, which is a shame, or we have seen sites doing nothing. I hope we don't do that again. >> The way people communicate is changing rapidly, as we all know. You can go viral very quickly. The 2016 election showcased how false information can spread and how effective it can be at motivating different populations. Often offensive content is shared in groups and then filtered out to a wider audience. What do you believe is the responsibility of tech companies to monitor and proactively remove content that is rapidly spreading, before it is flagged by users? >> I believe companies need to moderate and remove content when it is concerning and clearly illegal activity. If it's illegal in real life, it ought to be illegal online. Drug trafficking, human trafficking, wildlife trafficking, serious organized crime, and designated terror groups should not be given space to operate on these platforms. I also think CDA 230 needs to be revised to give civil law enforcement, and, sorry, state and local law enforcement, the legal tools to respond to illicit activity. That's one of the reasons. Ms. Oyama and Mr. Huffman, what steps are taken, beyond machine learning, when flags pop up with the same content being shared?
>> Thank you for the question. On YouTube we are using machines and algorithms. Once content is identified and removed, our technology keeps it from being re-uploaded. To your really important point about work across platforms and in cooperation, a good example is the Global Internet Forum to Counter Terrorism. We are one of the founding members, and many of the leading players in tech are part of it. One of the things that we saw during the Christchurch shooting was how quickly this content can spread. We were grateful to see last week that some of the crisis protocols have been put into place: during the shooting in Germany, there was a piece of content that appeared, and companies were able to engage in a protocol. Information was shared across the companies, and that enabled all of us to mark it. >> I am out of time. Thank you. >> The gentlelady yields back, and Mr. Bilirakis is recognized for five minutes. >> Thank you, Madam Chair. My first question is for Dr. McSherry, yes or no. I understand recent trade agreements include language mirroring Section 230, and some say putting such language in trade deals is simply for the purpose of baking it in, to protect the statute domestically. Do you see the intent of including such 230-like language in trade agreements as ensuring that we may not revisit the statute? >> No. >> Okay. Thank you very much. Madam Chair, I would like to ask that the EFF blog post of January 23, 2018 by Jeremy Malcolm be entered into the record. >> Without objection, so ordered. >> Thank you, Madam Chair, I appreciate it. The next question is for Mr. Huffman and Ms. Oyama. I want to say it right. Is that okay? Very good, thank you. In April 2018 I questioned Mark Zuckerberg about how soon illegal opiate ads would be removed from his website. His answer was that the ads would be reviewed when they were flagged by users as being illegal or inappropriate. This of course is a standard answer in the social media space. However, Mr.
Zuckerberg also said at the time that industry needs to, and I quote, "build the tools and proactively go out and find opioids before people have to flag them for us to review," end quote. This would, in my opinion, significantly cut down the time an illegal ad is on a website. Again, Mr. Huffman and Ms. Oyama, it has been a year and a half. This is an epidemic and people are dying; I'm sure you agree with this. Has the industry been actively working on artificial intelligence flagging standards that can automatically identify illegal ads? What is the status of this technology, and when can we expect it? Whoever would like to go first is fine. >> Thank you, Congressman. At Reddit all of our ads go through a strict human review process, making sure that not only are they on the right side of our content policy, which prohibits the buying and selling of controlled substances, but also a much more strict ads policy, which has a much higher bar to cross, because we do not want ads to cause any sort of controversy on our platform. >> We have to be proactive as far as this is concerned, and Mr. Zuckerberg indicated that is the case. You know, these kids are dying, people are dying, and we can't just stand by and have this happen and have access to these drugs, in most cases opioids and different types of drugs. Ms. Oyama, would you like to comment, please? >> We certainly agree with your comment about the need for proactive efforts. We have something called a risk engine that helps us identify if a bad ad is coming into the system. Last year, in 2018, we kicked 2.3 billion ads out of our system for violating our policies.
For any prescription pharmacy that would show up in an ad, the advertiser is independently verified by an independent group called LegitScript, and in the specific case of opiates, those are a controlled substance under federal law. There are a lot of different things we've done with the FDA, the DEA, and pharmacies like CVS offline to help promote things like drug take-back days, where people can drop off opiates they are not going to use. One of the things we have seen is that the vast majority, more than 99%, of opioid abuse begins in the offline world, from a doctor prescribing it or a family member or friend. Using technology to educate and inform people who might be potentially affected would be important. >> How about anyone else on the panel? Is the industry doing enough? >> I don't think the industry is doing enough. There's an enormous amount of drug sales taking place on Google Groups, on Instagram, and in Facebook groups. The groups on these platforms are the epicenter, and this is why industry has to be monitoring them. Relying on users inside a private or secret group to flag content is not going to happen. These firms know what users are up to. They monitor all of this all the time. They can figure it out. >> Congressman, can I add that there are two issues here? There's the ad, but also the content. You heard what Ms. Peters said about Reddit, and the content is there even if it's not in an ad. There are two places you have to worry about. >> Thank you, Madam Chair. I yield back. >> The gentleman yields back, and now I call on the chairman of the full committee for five minutes. >> Thank you, Madam Chair. I want to start with Ms. Oyama. Your written testimony discussed YouTube's community guidelines on hate speech, and I'm concerned about news reports that hate speech and abuse are on the rise on social media platforms.
How does Section 230 incentivize platforms to moderate such speech, and does Section 230 incentivize platforms to take a hands-off approach to removing hate speech? >> Thank you so much for the question. On the category of hate speech we have a very clear policy. That would be speech that incites violence, or speech that is hateful against groups based on attributes such as their race, their religion, their age, their disability status, or their veteran status. That is prohibited, and it can be detected by our machines, which is the case 87% of the time, by community flaggers, and by individual users. On the actions that we take: last quarter we saw a five-fold increase in the amount of content our machines were able to find. Those removals are vitally dependent on the protection of CDA 230, which gives service providers the ability to moderate content and take it down. We do have claims against us; people may sue us for defamation or bring other legal claims, and 230 is what enables not only Google or YouTube but any site with user comments, any site on the internet large or small, to be able to moderate. I think we would encourage Congress to think about not harming the innocent actors that are taking these steps in an effort to go after truly bad criminal actors, who are fully exempted from the scope of CDA 230 and should be penalized. Law enforcement will play a really important role, as it did with Backpage, which was taken down where there was platform liability. >> Dr. Farid, your written testimony stated that international terrorism carries criminal and civil liability associated with providing material support for terrorism. Starting with Dr. McSherry: understanding that Section 230 doesn't apply to federal criminal law, have you seen social media companies use Section 230 to shield themselves from civil liability while allowing their platforms to be used as propaganda and recruitment platforms for terrorists?
>> There are ongoing cases, and there have been several cases where platforms have been accused of violating the law for hosting certain content on their platforms. In those cases, I think if you look at the facts, the outcome in a lot of cases is quite appropriate. The reality is that it's very difficult for a platform to tell in advance where to draw the line between content that is simply protected communication and content that steps over the line. These cases are hard and complicated, and they have to get resolved. Section 230 creates a space in which, because of the additional protections it provides, service providers can choose to moderate. >> Going back to Dr. Farid, do you have any thoughts on how this should be addressed from a technological perspective? >> I want to start by saying that when you hear about the moderation we have heard about from Google and from Reddit, that has come from pressure: pressure from advertisers, pressure from Capitol Hill, pressure from the EU, and pressure from the press. There is bad PR, and then they start getting serious. For years we have been struggling with social media companies to do more about extremism and terrorism online. The EU started putting pressure on, Capitol Hill and advertisers started putting pressure on, and we started getting responses. That's exactly what this conversation is about: what is the underlying motivating factor? The pressure has to come from other avenues, and that means putting on pressure through modest changes to CDA 230 is the right direction. I believe, and I agree with Ms. Oyama, that good actors should encourage that change and help us deal with the problems. I've been in this fight for over a decade now, and it's a very consistent pattern: minimize the extent of the problem, and eventually, when there's enough pressure, act. I think we should skip to the end and recognize we need to start doing better. >> Thank you, Madam Chair. >> The gentleman yields back, and I recognize Congressman Gianforte for five minutes.
>> Thank you for being here today. Twenty years ago I harnessed the power of the internet to launch a business to improve customer service. That company is called RightNow Technologies, and from a spare bedroom in our home we eventually grew the business to be one of the largest employers in Montana, with about 500 high-wage jobs. The platform we created had about 8 million unique visitors per day, so I understand how important Section 230 can be for a small business. This important liability shield has gotten mixed up, however, with complaints about viewpoint discrimination, and I want to cite one particular case. In March of this year, the Missoula-based Rocky Mountain Elk Foundation came to my office because Google had denied one of their advertisements. The foundation did what it had done many times: they tried to use paid advertising on the Google network to promote a short video about a father hunting with his daughter. This time, however, the foundation received an e-mail from Google saying that, quote, any promotions about hunting practices, even when they are intended as a healthy method of population control or conservation, are considered animal cruelty and deemed inappropriate to be shown on our network. The day I heard about this I sent a letter to Google, and you were very responsive, but the initial position taken was absurd. Hunting is a way of life in Montana and in many parts of the country. I'm very thankful that you worked quickly to reverse that, but I remain very concerned about the effort to stifle the promotion of the Rocky Mountain Elk Foundation and how they were treated. I worry that other similar groups have faced similar efforts to shut down their advocacy. We really don't know how many hunting ads Google has blocked in the last five years. In my March letter I invited Google's CEO to meet with leaders of our outdoor recreation businesses in Montana. I haven't heard anything back. Ms. Oyama, I would extend the invitation again.
I think frankly it would help Google to get out of Silicon Valley, come to Montana, and sit down with some customers and hear from them directly about the things that are important to them. I'd be happy to host that visit. >> I'd love to meet with you there. I think it's important to understand the work that these groups do to further conservation and to help species thrive. >> As an avid hunter and outdoorsman myself, I know many businesses in Montana focus on hunting and fishing. I worry they may be denied the opportunity to advertise on one of the largest online platforms. I also worry an overly burdensome regulatory regime could hurt small businesses and stifle Montana's rapidly growing high-tech sector. The invitation is open. Dr. Farid, one question for you: how can we walk the line between protecting small business and innovation versus overly burdensome regulation? >> I think we need to be very careful here, because right now we have near monopolies in the technology sector, and if we start regulating now, small companies may not be able to compete. There are ways of creating carve-outs; the EU and the UK, as they talk about regulation, are creating carve-outs for small platforms. I do think we want to tread lightly here. I think we want to inspire competition for better business models, and I think there are mechanisms to do that. >> We have had a lot of discussion today about the efforts being taken to get criminal activity off of networks, and I applaud that; we should continue to do that. But as a follow-on, Doctor: how do we ensure that content moderation doesn't become censorship and a violation of our First Amendment? >> The way we have been thinking about moderation is as a collaboration between humans and computers. Computers are very good at doing the same thing over and over again, but what they're not good at is inferring context.
the way content moderation works, for example: a human moderator says this is explicit, we fingerprint the content, and we remove it — very specific to that piece of content. that is the scale you need to be operating at, so if you are going to automate, the technology has to operate at high scale. computers can't do it all, though, so we also need more human moderators. you heard from google — 10,000 moderators. there are 500 hours of video uploaded every minute. that is not enough moderators; you can do the arithmetic yourself. each moderator would get hours and hours of video per hour, so we have to keep up with moderation. >> i look forward to seeing you in montana, and i yield back. >> the gentleman yields back, and now i recognize congresswoman blunt rochester for five minutes. >> thank you, madam chair, and to the chairman and ranking member, thank you for holding this important hearing today. i think many of us here today are seeking to more fully understand how section 230 of the communications decency act can work well in an ever-changing virtual and technological world. this hearing is really significant, and as ms. oyama said, i do not want us to forget the important things the internet has provided to us. but also, as mr. huffman said — and you applied this to reddit, but i think it applies to all of us — we must constantly be evolving our policy to face new challenges while also balancing our civil liberties. we have a really important balance here. my question surrounds the question that mr. loebsack asked about content moderation, and i want to start off by saying that the utilization of machine learning algorithms and artificial intelligence to filter through content posted on websites as large as youtube provides an important technological solution to the increasing amount of content to moderate. however, as we become more and more reliant on algorithms, we are increasingly finding blind spots that may be difficult to reach with simply more and better code.
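the fingerprint-and-remove workflow dr. farid describes — a human reviews a piece of content once, and every later upload is checked against its fingerprint automatically — can be sketched as follows. this is a minimal illustration, not any platform's actual system: it uses an exact sha-256 hash as a stand-in for the robust perceptual fingerprints (such as photodna) that production systems use so matches survive re-encoding and cropping.

```python
import hashlib

# Fingerprints of content a human moderator has already flagged.
# Real systems store perceptual hashes that tolerate re-encoding;
# a plain SHA-256 only catches byte-identical re-uploads.
flagged = set()

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def moderator_flags(content: bytes) -> None:
    # The expensive human judgment happens once per piece of content.
    flagged.add(fingerprint(content))

def should_remove(upload: bytes) -> bool:
    # Every subsequent upload is checked automatically, at scale.
    return fingerprint(upload) in flagged

moderator_flags(b"video flagged by a human reviewer")
print(should_remove(b"video flagged by a human reviewer"))  # exact re-upload: True
print(should_remove(b"previously unseen video"))            # new content: False
```

the arithmetic the testimony invites is stark: 500 hours uploaded per minute is about 720,000 hours of new video per day, so 10,000 moderators would each need to screen roughly 72 hours of footage per day to view everything even once — which is why matching against already-reviewed fingerprints has to carry most of the load.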
i think there is a real concern that groups facing prejudice and discrimination will be further marginalized. as i thought about this, i even thought about groups like veterans, or the african-american community and the 2016 election. dr. farid, can you describe some of the challenges of moderation by algorithm, including possible bias? >> if we automate at this scale, we will have problems, and we have already seen that. the problem with automatic moderation is that it doesn't work at scale. when you are talking about billions of uploads, even if your algorithm is 99 percent accurate — which is very, very good — you are still making one-in-100 mistakes. that is literally tens of millions of mistakes you are going to be making at the scale of the internet. the underlying idea that we can fully automate this, not take on the responsibility, and not hire more human moderators simply doesn't work. "give us time to fix the algorithms, because we don't want to hire the human moderators" — we know today that isn't going to work in the next two years, five years, ten years. and it's a little bit worse than that, because it also assumes an adversary that is not adapting. we know the adversary will adapt. we know, for example, that all machine learning and ai algorithms that are meant to identify content are vulnerable to attacks: you can add small amounts of perturbation to the content and completely fool the systems. >> i want to ask a quick question. we talked about the number of moderators you have available to you, and i know that we've had many hearings on the challenges of diversity in the tech field. the 10,000 or so moderators you mentioned — are they users, or are they people that you hire? quickly, so everybody knows. >> it's a combination. we are about a hundred employees and millions of users.
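the one-in-100 arithmetic in dr. farid's answer scales like this — a back-of-the-envelope sketch in which the daily upload volume is an illustrative assumption, not a figure from the testimony:

```python
# Back-of-the-envelope arithmetic behind the "99 percent accurate"
# point: even a very good classifier errs enormously at internet scale.
uploads_per_day = 1_000_000_000   # assumed: ~1 billion items per day
error_rate = 0.01                 # a 99%-accurate model errs 1 in 100

mistakes_per_day = int(uploads_per_day * error_rate)
mistakes_per_year = mistakes_per_day * 365

print(f"{mistakes_per_day:,} mistakes per day")    # 10,000,000
print(f"{mistakes_per_year:,} mistakes per year")  # 3,650,000,000
```

with a few billion uploads a day rather than one billion, the daily error count lands in the tens of millions — the figure cited in the testimony.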
>> so the 10,000 that i mentioned is full-time employees. we also work with professional moderators and community flagging, and the flagger could be law enforcement or an average user. >> in the interest of time — i don't have a lot of time — could you provide us with information on the diversity of your moderators? that is one of my questions. and then also — i don't like to make assumptions, but i'm going to assume that it might be a challenge to find diverse populations of individuals to do this role — what are you doing in that vein? perhaps we can have a follow-up on that. my last question is for the panel: what should we be doing to help in this space? i am really concerned about the capacity to do this and to do it well, if anybody has any suggestions or recommendations. mr. farid is already pushing his button. >> i think this conversation is helping. i think you are going to scare the bejesus out of the technology sector, and i think that's a really good thing to do. >> and with that my time has expired — thank you so much to all of you for your work. >> last but not least, representative soto, you are now recognized for five minutes. >> thank you, madam chairwoman, and a personal thank-you to the witnesses for being here. at least you know you are in the home stretch. it is amazing that we are here today. when we think about how far the internet has progressed, it is one of the greatest inventions in human existence, connecting the world and giving billions a voice whose stories would never have been told before, and providing knowledge at our fingertips. it is just incredible. we know section 230 has been a big part of this, providing that safe harbor — essentially the dam holding back a flood of lawsuits. it has enabled innovation, but it has also created a breeding ground for defamation and harassment, for impersonation and election interference, and a breeding ground for white supremacists, disinformation, and global terrorists and extremists.
we have this wonderful side, and then all of these terrible things done to humanity on the other side. my biggest concern is that lies spread faster than truth, while the wheels of justice seem to go at a snail's pace — that is one thing i constantly hear from my constituents. so i want to start with some basics, just so i know everybody's opinion. who do you think should be the cop on the beat, the primary enforcer — the choices being the ftc, the fcc, or the courts? i'd like to know what each of you thinks. >> if those are my only three options — >> you may name a fourth if you like. >> united states society, and our platforms. >> the courts. it forces, in some sense, the companies to actually be the norm producers. >> the courts have a very important role to play, but our principle is that at the end of the day users should be able to control their internet experience. we need more tools to make that possible. >> i think that's a ridiculous argument. i've studied organized crime: the vast majority of people are good, and a small percentage of people, statistically, in any community commit crimes. you have to control for that — that is what law enforcement is for. >> a multi-stakeholder approach, but i want to say the courts and the ftc do have jurisdiction. as you know, the ftc has broad authority already, and the courts are always looking at the outer confines. >> dr. farid? >> i agree — multi-stakeholder. we all have a responsibility here. >> and if we were to tighten up the rules in the courts, it would be great to hear — starting with you, dr. farid — whether limiting it to injunctive relief would be enough, and whether or not there should be attorneys' fees at stake. >> understand, i am not a lawyer or a policymaker; with due respect, i'm not the one who should be answering that question. >> would relief in the courts be enough to change certain behaviors, do you think?
>> i do think the courts have that power. i would want to echo the small business and startup voices, which do see that the framework has created certainty, and that certainty is essential for economic viability. >> similarly — i would shudder to think what it would be like if we were smaller; even at our size we are on the receiving end of lawsuits. >> as you can see, all i can see is the first amendment and prior restraints, so i need to be sort of careful about the kinds of remedies that we think about. but law operates, if we allow it to operate: if people act unreasonably and recklessly, i think the array of possibilities should be available. >> lastly, i want to talk a little bit about section 230 being in our trade deals. i'm from orlando; the theme parks and a certain fictional wizard are two of our greatest assets. ms. peters, i know you talked a little bit about the issue of 230 and how that could be problematic for a region like ours, where intellectual property is so critical. >> it's problematic because essentially it would tie congress's hands from reforming the law down the line. that's precisely why industry is pushing to have it inside of the trade deal. >> there are 90 pages of copyright provisions in the agreement; they can be treated the same as u.s. law, and it doesn't bind congress's hands at all. >> that wouldn't affect the trade deals, in your opinion, then? >> there is no language in the trade deals that binds congress. congress has regular hearings on copyright law. there is nothing in the trade agreement that would limit the ability of the u.s. to create a u.s. framework, at a time when countries like china and russia are developing their own frameworks for the internet. there is nothing in the current usmca or the u.s.-japan agreement that would limit your ability to later look at 230 and decide that it needs tweaks. >> thank you; i yield back. >> the gentleman yields back. that concludes our period
for questioning, and i ask unanimous consent to put into the record: a letter from creativefuture with attachments; a letter from the american hotel and lodging association; a letter from the consumer technology association; a letter from the travel technology association; a white paper from airbnb; a letter from common sense media; a letter from the computer and communications industry association; a letter from representative grace [inaudible]; a letter in support of the plan act; a letter from the i2coalition; [inaudible]; a letter from the motion picture association; an article from the verge titled "searching for help"; and a statement from r street. without objection, so ordered. and let me thank our witnesses. i think this was a really useful hearing, and those of you who have suggestions — more concrete ones than came up today — our committee would appreciate them very much, and i think the joint subcommittee would appreciate that as well. so i want to thank all of you for your thoughtful presentations and for your written testimonies, which often go beyond what we are able to hear today. i want to remind members to submit additional questions for the record, to be answered by the witnesses who have appeared, and i ask the witnesses to please respond promptly to any such questions they may receive. at this time, the committees are adjourned. thank you.
