Transcripts For CSPAN3 Experts Testify On Big Tech Accountability - PART 3 20240708




we are going to recognize mr. volokh. unmute yourself. >> can you hear me? can somebody pull up the powerpoint? ok. are the powerpoints up by any chance? chairman doyle: i think that our staff is putting it up. let's hold on a second here. ok, we are going to get started. we are still trying to get that up. you can start your testimony. mr. volokh: thank you so much for inviting me. it's an honor to be asked to testify. i was asked to be technocratic here, to talk about the particular language of some of the bills and identify the things that may not be obvious about them. i will start with the justice against malicious algorithms act. one important point to think about is that it basically creates a strong disincentive for any personalized recommendations that a service would provide, because it strips immunity for recommending information if the provider -- can i see that, please? -- if the provider is making personalized recommendations and such recommendations lead to physical or severe emotional injury. that means it would be a huge disincentive for youtube and those kinds of entities to give recommendations based on information about you, about your location, about your past search history, because they might be worried that they would be liable for that information. the incentive would be to give generic recommendations, the generally popular material, not personalized, or to recommend big-business-produced material, which is likely to be safe and to provide indemnification for the platform if there is a lawsuit. so the consequence is mainstream media would win and user-generated content would lose, in that some creator is putting up things lots of people like, and the platform would no longer be inclined to recommend them once it is subject to liability. you could think that is good or bad depending on how you think about user-generated content, but i think it would be a consequence. next slide. so, now i will turn to the preserving constitutionally protected speech act. the thing that is not a surprise is that it clearly authorizes state laws that protect against discrimination by platforms. those laws may now be preempted by 230, which can be read as giving platforms the ability to block any material they find objectionable. this modification would allow states, if they wanted to ban discrimination by platforms, to do so. there could be an interesting first amendment problem there, a hard question, but at least it would remove the section 230 obstacles to those kinds of laws that require platforms to treat all opinions equally. next slide. another thing about the statute, about the bill, is it would strip away immunity when the provider utilizes an algorithm to post content to a user, unless the user knowingly and willfully selects an algorithm. all suggestions stem from algorithms: recommending the most popular things is an algorithm; recommending a random thing is an algorithm. so the real question is what it would take for a platform to comply with the knowing selection requirement. if something like "i agree this will be selected by an algorithm" would be enough to comply, then the bill would not do much harm, though i am not sure it would do good to require everybody to take the extra time to agree to the algorithm.
on the other hand, if it requires an explanation, or an array of choices available to users, that could be a problem, because computers cannot work without algorithms, so it could sharply limit the recommendations a platform can supply, or invite litigation about what counts as knowing and willful selection. next. um, the third major feature of the preserving constitutionally protected speech act is it would require an appeals process and transparency. there is a lot to be said for the value of a transparency requirement, even one imposed on big businesses, when the platforms are essential to american political life. at the same time, it depends on how transparent it has to be. the requirement says the company must clearly state why content was removed. what if they say it is hateful? why are you saying that? we said hateful, is that clear enough? what about pornography? it is not pornography, it is art. is that clear enough? likewise, what counts as a reasonable or user-friendly appeals process? the bill does not define the phrase, and user-friendly is not a legal term. next. the safe tech act. i want to say a few things about this. there is no immunity under the act when the provider has accepted payment to make the speech available or has funded the creation of the speech. that means paid hosting services would be stripped of immunity. so hosting services, like blogging software, those kinds of things, would not be able to charge, or else they would be liable. they would have to be free, advertising-supported services. i am not sure that that is a good thing to require, but that is what the bill requires. it would also mean a company would be liable for anything posted by creators whose content it funds through shared revenue. so, youtube shares advertising revenue with creators, and it would be liable in that situation. i'm not sure that is a good idea. it may not be intentional. chairman doyle: can you wrap up the testimony? you are a minute and a half over. mr. volokh: let me just close and i will be happy to answer questions later. chairman doyle: let's see. we want to recognize mr. lyons. mr. lyons: thank you to the members of the committee. my name is daniel lyons. i am a senior fellow at the american enterprise institute and a professor at boston college law school, where i write about internet policy. i want to focus on two themes. first, section 230 provides critical infrastructure in the online ecosystem. we tinker with it at our peril. secondly, attempts to regulate algorithms risk doing more harm than good for internet-based companies and for users, while unleashing litigation only tangentially related to the issues the subcommittee seeks to address. one cannot emphasize enough the importance of section 230 to the modern internet landscape. the statute has accurately been described as the 26 words that created the internet. the hearing is focused primarily upon the larger social media platforms, such as facebook, but it is important to recognize that a wide range of companies rely heavily on section 230 every day to acquire and share user content with millions of americans. section 230 provides the legal framework that allows platforms to facilitate users' speech at mass scale, and it promotes competition among the platforms. it relieves startups from the costs of content moderation, which reduces barriers to entry online. because section 230 is wound into the fabric of online society, it's difficult to predict in advance how changing the statute will ripple throughout the ecosystem. one thing we know is that the ecosystem is complex and dynamic, which creates a greater risk of unintended consequences.
professor eric goldman argues that reducing section 230 protections makes it harder for disruptive new entrants to challenge prominent companies. costs would rise for everybody, but the incumbent can afford that more easily than a startup. it would be ironic if, in seeking to reduce facebook's influence, this committee inadvertently protected it against competition. congress's previous amendment highlights the risk of unintended consequences. in 2017, congress eliminated intermediary immunity for sex trafficking content. the purpose was noble, to reduce online sex trafficking, but good intentions do not justify bad consequences. subsequent studies by academics and by the gao show that fosta made it harder, not easier, for law enforcement to detect perpetrators, made conditions more dangerous for sex workers, and had a chilling effect on free speech. the bills before the committee present similar risks. this is true of attempts to regulate platform algorithms. we have heard a lot about how algorithms can promote socially undesirable content, but we must recognize that they also promote millions of socially beneficial connections every day. yes, they make it easier for neo-nazis to find each other, but they also make it easier for other minorities to find each other, like lgbtq people, social activists, or bluegrass musicians. they benefit from the companies' use of algorithms to organize and curate user-generated content. it would be a mistake to eliminate those benefits because of the risk of abuse. the genius of the internet has been the reduction of information costs. one click allows a user to access a vast treasure trove of information, transported around the planet at the speed of light for nearly zero cost. the downside is filtering costs: users must sort through the treasure trove to find what they want, and what they want differs from user to user. internet companies compete fiercely to help users sort the information, and they do so through algorithms. disincentivizing them would reduce those services, in part because the bills are worded vaguely. the justice against malicious algorithms act defines personalized algorithms as using any information specific to an individual, and that is a broad phrase. if any algorithmic recommendation contributes to physical or severe emotional injury, also vague terms, the platform is stripped of its protections. so the incentives for the platforms are clear: whatever social gains we reap will come at the cost of litigation only tangentially related to the statute's purpose. i teach my students to identify ambiguous terms, because those are the terms that are most likely to prompt litigation. here, terms like materially contributes and severe emotional injury are catnip to creative trial lawyers, particularly in a dynamic environment where innovation creates new opportunities for litigation. that was the lesson of the tcpa, the anti-robocall statute that found new life in 2010 to target conduct that congress neither intended nor contemplated. those suits ultimately fail, but they still impose legal costs, and those costs disproportionately affect startups. thank you. chairman doyle: we have concluded our second panel's opening statements. we will move to members' second round of questions. each member has five minutes. i will start by recognizing myself for five minutes. ms. goldberg, thank you for being here and taking up the fight for individuals who have suffered tragic and unimaginable harm. it is important work.
i have often heard that by amending section 230, congress would unleash an avalanche of lawsuits upon companies, which would break the internet and leave only the largest platforms standing. can you tell me your thoughts on the matter and go into greater detail on the hurdles that users would still have to overcome to bring a successful suit against a platform? ms. goldberg: there's so much concern about the idea that if we remove section 230, it will flood the courts and litigants will stampede in. to that i say, what about the frivolous section 230 defenses we see? there is a case against facebook for discrimination where they claim that mark zuckerberg is immune from liability for lies said to congress, orally and in person. let me tell you why removing section 230 is not going to create a flood of frivolous suits. one, it is unlawful already to file frivolous litigation. it is sanctionable and it is a violation of the rules of professional responsibility. two, the onus is on the plaintiff to prove liability. people say that removing section 230 creates liability. no, the pleading standards are very high and hard, and removing an exemption does not create liability; that is still the hard work of the plaintiff. three, basic economics deter low-injury cases from going forward. litigation is arduous, expensive and requires stamina for years. it takes thousands of hours of attorney time, and these are personal injury cases taken on contingency. the costs of experts and depositions add up, and few lawyers will take cases where the costs of litigation are incommensurate with the damages. that leads to only the most serious cases being litigated. four, nothing will be procedurally different without section 230. motions to dismiss on other grounds are filed by defendants at the same time: the statute of limitations, um, poor pleadings. and five, anti-slapp. it's a faster and harsher deterrent for defendants to get constitutionally protected, speech-based claims dismissed. plaintiffs bringing frivolous cases are deterred by anti-slapp, which shifts the fees, so that if a defendant brings an anti-slapp motion, then a plaintiff that loses has to actually pay the defendant's legal fees. so it is very expensive, even punitive, to bring a frivolous speech-based claim. six, uninformed plaintiffs sue anyway. section 230 doesn't deter people from filing lawsuits -- there is no barrier to getting an index number and filing a lawsuit. chairman doyle: thank you. it's good to have you back, by the way, matt. your organization is committed to ensuring all communities have a voice online and can connect and communicate across technologies. we have been told by large tech platforms and others that in changing section 230, we must create exemptions for smaller online platforms, but you do not think that is true. we have a fairly small exemption of that type in the justice against malicious algorithms act. can you explain your view on small business exemptions, generally? mr. wood: you are right, we do not think it is the way to go. ms. goldberg's answer was amazing, and it shows the balances we have to strike. a small business exemption could prevent an increase in litigation and even strategic lawsuits against public participation, so there is danger there. but we also think that big platforms can generate beneficial engagement, and small ones can cause harm, so that is why we would be careful about attaching liability only to the largest platforms and making sure that smaller ones can never be held accountable.
chairman doyle: ok, thank you. my time is up. i want to recognize the ranking member for five minutes. >> thank you very much to the panel. mr. volokh, twitter has a new ceo, and his earliest statements indicate he is not a fan of the first amendment. twitter has expanded the scope of their private information policy to prohibit the sharing of private media, such as images or videos, without the subject's consent. this is an abuse of power by these companies that shows they are acting as arbiters of truth. however, twitter goes on to say that they will take into consideration whether the images are publicly available, are being covered by journalists, are being shared in the public interest, or are relevant to the community. i understand that twitter is protected under section 230 for this type of action, but how would the action be interpreted under the first amendment if it were the government taking this action? you need to unmute. mr. volokh: i'm sorry. under the first amendment, the government could not --. if it was a newspaper doing it, it could do that, and newspapers do that kind of thing. should congress consider it more like a newspaper, or more like the post office, which is government run, or ups or fedex? we do not expect them to decide, oh, there are bad things being done, so we are going to shut off phone service. we do not expect ups to say, we will not deliver books from this publisher because we think that they are bad; they are only the carrier. so the question is, when twitter is letting people communicate with others on the feed, should the law view twitter more like a phone company, more like a post office or ups, or more like a magazine, which is supposed to be making editorial judgments. that is the question. >> mr. wood, you talked about how a ruling opened the door to providing platforms protections for material that a platform knows is unlawful, because the subsequent distribution of that material was viewed by the court as republication of that material. there seems to be an area of general agreement between scholars on both ends of the political spectrum. as part of the big tech accountability platform, we propose a bad samaritan carve-out that would strip protection from platforms that facilitate illegal activity. how would this proposal help hold tech companies responsible for illegal activity on their platforms? mr. wood: as you noted, we talked about people on opposite sides of the political spectrum who have taken that view, and we have explained that distributors could be liable. there is some appeal to thinking of every time a website serves content as a publication, but that is not the only way to think about it. when the platform uses algorithms or other techniques, that could be seen as separate from the original publication, and it could be held liable for it. that is what we are talking about: how we could do it, whether it is the majority's proposal or the minority's or our proposal -- there should be ways, whether we call them bad samaritans or not, to hold companies accountable when they know that their choices are causing harm. >> dr. franks, in your testimony you seemed to agree with this assessment, but you suggested a second concept, deliberate indifference, for the bad samaritan platform. would you elaborate? dr. franks: my apologies.
yes. the deliberate indifference standard is intended to set a bar for how intermediaries would need to respond to certain types of unlawful content. they would not lose the shield simply because the content was there; losing it assumes that they knew about the content and refused to take steps to address it. >> my time is about to expire. i yield back. chairman doyle: let's see. you are now recognized. >> i appreciate the thoughtful way we are approaching reform, which can clearly have wide-ranging effects across the internet ecosystem. mr. wood, what specific reforms can we make to section 230 that will ensure platforms are not padding their bottom lines while knowingly harming vulnerable populations? mr. wood: we have not endorsed any one of the approaches, but we think there are good ideas in all of them, given the wide-ranging impacts you discussed. finding a way to hold these platforms accountable when they know they are causing harm, whether by examining the interpretation of the protections in (c)(1) or by understanding that distribution and amplification are different from publication. there are other approaches, but we are all looking at the same problems and talking about how to address them, not whether we should. >> that is the question, how do we address this complicated topic? i appreciate your thoughts. thank you for coming today again. i know you have thought a lot about how it might be appropriate to carve out some types of algorithms from legal immunity under section 230. what do you think about carving out general product design features, as was suggested in one report? >> in general, one thing that's been helpful, as we heard from the first panel, is that attention has shifted from debating the content and the right to post and who gets to decide if it gets taken down, toward looking upstream at the practices of the platform, the design, before something goes viral, that makes it go viral or pushes it into someone's newsfeed or a child's instagram feed. one of the things that has come out as a result of that work is that facebook employees themselves admit the mechanics of the platform are not neutral and are in fact spreading hate and misinformation. so the bill the chair has introduced and the other bill, those especially, by focusing on either nontransparent algorithms or knowing and reckless use of algorithms that then result in extraordinary harm, whether it is international terrorism or serious physical or emotional harm -- that narrow carve-out, where the design has caused the harm, seems to get at some of the most egregious issues without sweeping too broadly. >> thank you. dr. franks, how does the status quo of section 230 allow hate to proliferate without any accountability to the harmed public? >> disinformation is one of the key issues we are worried about in terms of the amplification and distribution of harmful content, fraudulent and otherwise. one provision of section 230 safeguards the intermediaries promoting this content from any kind of liability, so there's no incentive for companies to think hard about whether the content they are promoting will cause harm. they have no incentive to review it, think about taking it down or think about whether it should be on the platform at all. >> thank you. ms. goldberg, even if we reform section 230, as you have mentioned, there are other barriers to plaintiffs' court cases. how can we ensure plaintiffs have access to the information they need to properly plead their case?
>> well, we create the exceptions and exemptions from immunity so plaintiffs can get to the point of discovery, where the defendant is compelled and required to turn over information that's relevant to the case, so that a plaintiff has a shot at building a viable lawsuit. >> thank you. mr. chairman, i yield back. >> the gentleman yields back. the chair recognizes mr. guthrie. >> thank you, and i appreciate the witnesses being here. i know it has been a long day and i appreciate you being here. i have a concern. as i said earlier today when i talked to the republican leader, a real concern about opioid addiction and opioid sales; the sale of opioids on social media platforms has skyrocketed. in many cases, law enforcement shares information or leads with platforms to take down this content, but those calls sometimes go unheeded. so for you or anyone who has time to answer this, my question is first to you. can you explain which provisions of section 230 provide immunity for platforms when they know of specific instances where this content, illegal opioids, is on their platform and yet do not take action to remove it? would you recommend modifying 230 to address this? how would you balance the need for accountability with fostering platforms' viability? do you want to -- >> sure. i do not think section 230 needs to be modified in light of this. if we were talking about federal criminal law enforcement, generally speaking, section 230 does not preempt federal criminal prosecution. if platforms are actively involved in this or aiding it under the proper legal standards, prosecutors can prosecute them. section 230 would preclude civil liability lawsuits against platforms, but i'm not sure there would be that much by way of possible civil liability for platforms, even if they are alerted there's something going on in a particular online group, and i'm not sure we want platforms to be held liable for it. to the extent people are engaged in this illegal activity on platforms, it's helpful to law enforcement to have it done in a place where they can hop on and look around and see the ads and use them as a basis for prosecution. platforms certainly are not barred from alerting law enforcement to such things. they certainly are obligated, in fact, to respond to law enforcement subpoenas if law enforcement wants to subpoena things. the right approach is not to draft platforms as opioid cops, but instead to have law enforcement use the information they can find on the platforms to prosecute illegal transactions. >> thanks. thank you for that answer. miss goldberg, do you have an answer? >> thank you for having this issue on your mind. we represent four families who lost a child, one as young as 14, who bought one fentanyl-laced opioid pill during the pandemic. a kid home from college, bored, experimenting. you asked what in section 230 precludes us from being able to hold a platform responsible for facilitating these kinds of sales. the fact is that if we looked at section 230 as it is written, i think we could agree that the matching and pairing is not information content; it's not a speech-based thing that a user posted. however, the way that the courts have interpreted section 230 over the last 27 years is more of a problem than how it is currently drafted. it's so extravagantly interpreted that it has come to cover all product liability cases.
you can't sue over anything that's related to the product design or its defects. you can't even sue a company for violating its own terms of service. >> i have 40 seconds. how would you change it? >> i think one useful provision in the safe tech act is that it has a carveout for wrongful death. if we have the most serious harms overcome section 230 -- or remove section 230 for the most extreme harms -- that's how we do it. >> i have 15 more seconds. >> i will sing. >> okay. i will yield back my time. thanks for your answer. thank you. >> chair recognizes miss clarke for five minutes. >> thank you, mr. chairman. thank you to our panel witnesses for your testimony here today and for your patience as we came back from voting this afternoon. many today have stated that section 230 has served its intended purpose of allowing a free and open internet the opportunity to blossom and connect us in ways previously thought unimaginable. but under the overly broad interpretation of the law by the federal courts, it has aided in the promotion of a culture in big tech that lacks accountability. respect for free speech in the real world and online is of paramount importance. we can all acknowledge the important role section 230 plays in creating the conditions for free speech to flourish online. unfortunately, many companies have used this protection as a shield for discriminatory or harmful practices, particularly with respect to targeted online advertising. that is why i was proud to introduce hr-3184, the civil rights modernization act, to ensure civil rights laws are not sidestepped. section 230 provides exemptions to the liability shield in federal criminal prosecutions, intellectual property disputes and prosecutions related to sex trafficking. as targeted advertising can be used to exclude people from voting, housing, job opportunities, education and other beneficial economic activity on the basis of race, sex, age and other protected status, now is the time to codify and modernize our civil rights to ensure our most vulnerable are not left behind in the digital age. my first question is for mr. wood, before giving other panelists the opportunity to chime in as well. mr. wood, in your prepared testimony you made clear your belief that a complete repeal or drastic weakening of section 230 would not sufficiently address the harms that we have been discussing today. why do you feel that a more targeted approach is the better option? >> thank you, representative clarke. that's our belief. i think it speaks to the harms you are talking about here. if we were to repeal section 230, that would still beg the question: what are people going to sue for? if there's no remedy underneath that repeal, even though we have taken away the liability shield, there could still be no relief for the plaintiff who has been harmed. we have a lot of support and sympathy for the ideas in your bill. getting civil rights back into the equation and making sure platforms can't evade civil rights law is key. the only question we have about the approach is whether we ought to say only targeted ads should trigger that change in the shield. perhaps there are ways in which platforms could discriminate that don't involve targeted advertising. we would like to look more broadly and ask when they are knowingly contributing to or distributing material, or engaging in some conduct that discriminates, and make sure we can address those issues when and where they arise, whatever the background for that harm. >> thank you.
dr. franks, in your testimony you spoke about collective responsibility. could you expound on that idea? >> yes. the concept of collective responsibility is something we are familiar with in normal times in our physical spaces. the things that cause harm often have multiple causes. we know there are people who act intentionally to cause harm, but there are also people who are simply careless, people who are sometimes reckless, and people who are sometimes simply not properly incentivized to be careful. the concept tells us those parties do have some responsibility to be careful. when people are negligent or reckless, or when they contribute in some minor or major way to harm, they can and they should be found responsible. what that does for all of us is it encourages people to be more careful. it encourages businesses not to simply seek to maximize their profits but to consider the ways they might allocate their resources to think about safety, to think about innovation, to think about absorbing to some extent the cost of any harm that might result from their practices. >> thank you very much. i thank all of our witnesses for appearing before us today. with that, mr. chairman, i yield back the balance of my time. >> chair now recognizes mrs. rodgers for five minutes for her questions. >> thank you, mr. chairman. i wanted to ask about a provision in the legislation that i have been working on related to section 230, which would remove liability protections for platforms that take down content that's constitutionally protected. it requires companies to have an appeals process and to be transparent about their content enforcement decisions. would you speak to how you believe this approach to amending section 230 would impact speech online? >> it's complicated. i don't know the answer to that fully. here is the upside, here is the advantage. by modifying section 230 to strip away platforms' immunity for blocking political or religious or scientific claims -- while they would keep protection for blocking things that are obscene or lewd or violent -- it would make it possible for states to step in and pass laws requiring non-discrimination. that's a good thing. in fact, i think there's a lot to be said for that. the platforms are tremendously powerful, wealthy entities. one could argue that they shouldn't be able to leverage that kind of economic power and political power, that we shouldn't have all these very wealthy corporations deciding what people can and cannot say online politically. on the other hand, there could be downsides. there would be more litigation, some probably funded by public advocacy groups, where people say, well, my item was deleted because of its politics. the platform says, no, it was pornographic. i think the real reason was politics. there might be a good deal of extra litigation over this and maybe extra -- if you think it's good for platforms to remove death threats, they would be allowed to do that, but there would be the extra possibility of litigation; if they remove it, somebody would say, that wasn't threatening, now i will sue you. it's pluses and minuses. >> okay. thank you. i wanted to ask a follow-up related to the justice against malicious algorithms act, which would narrow section 230's protection where personalized recommendations cause severe emotional injury. would you speak to how you believe that would impact free speech on platforms? do you think it would silence individual american voices? >> yes, i think it would.
because platforms would realize that recommending things using an algorithm -- and everything is an algorithm -- means any personalized recommendation is dangerous. it's dangerous because of the possibility that there would be defamation that causes severe emotional injury. they may worry about that. they can't tell what's libelous and what's not, but they know what's risky, and what's risky is personalized recommendation of content by unknown users. platforms could say they won't recommend anything, but that's bad for business; recommendations keep people on the system. instead, they will provide generic recommendations. instead of recommending a video it thinks you might like, the platform will recommend videos that most people like, which is not as much fun, but it's safer for the platform. or they will recommend professional content, mainstream media content, where there's less risk of possible injury stemming from that, less risk of defamation. they could also make sure that the professional companies indemnify them against liability, because those companies have deep pockets. that's good for big business, good for big media, not so bad for platforms, but not good for user-generated content, which will no longer be recommended even if it's perfectly fine. >> thank you. if an internet user felt a political opinion they disagreed with caused severe emotional harm, could the user sue the platform under this bill? >> they certainly could. it remains to be seen whether the court would recognize that claim, but severe emotional harm is not defined in a way that would exclude it. the wise platform policy would be to not offer personalized algorithms, so you don't run the risk that, as a result of using a personalized algorithm, you inadvertently suggest content that's going to trigger liability. >> thank you all for being here. i yield back. >> chair now recognizes mr. mceachin for five minutes. >> thank you, mr. chairman. i urge colleagues to take the view that when we are talking about immunity, what we are talking about is not trusting our constituents. they're the ones who make up juries. what we are saying is that, even with proper instruction and a proper trial put in front of them, they can't get the answer right. they are wise enough to elect us, wise enough to deal with issues of death or freedom in terms of criminal liability, but we can't trust them to deal with a few dollars and cents when it comes to big tech and these immunities. that to me seems to be incongruent. i trust my constituents. i think they are capable of deciding these issues. that being said, miss goldberg, you have put together what you call appendix a. i assume you believe that to be a good piece of model legislation for what we're trying to do? >> i think i misunderstood what you said. >> i think when i looked at your testimony, you have what you call appendix a, which seems to be a bill. i think you are suggesting that might be a model for going forward with 230 relief? >> yes. thank you. i very much -- >> let me just -- hold on, miss goldberg. i want to make sure i understood the purpose of that appendix. i want to ask you, what is the difference -- any substantive differences between your model bill and the safe tech act? >> it's very much inspired by parts of safe tech. there are a few additional carveouts in the bill that i propose. namely that -- >> would you just say what those are? >> sure. i feel there needs to be a carveout for injunctive relief and court-ordered conduct.
there also needs to be -- i'm trying to think -- a blanket exemption for product liability claims, which i don't see in safe tech currently. i also don't see anything that carves out child sexual abuse and child sexual exploitation, which in my opinion, along with the wrongful death claims that you do have, are the types of claims that are the most serious and need specific carveouts. >> okay. i appreciate that. we will certainly look at those things. i would suggest to you, if you look at the bill again, you might be looking at an old one; injunctive relief is in the safe tech act. the gentleman from free press action, would you tell me your name again? >> certainly. matt wood. >> okay, it is mr. wood. i thought i heard another name said. you seem to believe the safe tech act would adversely affect free speech. am i understanding that correctly from your testimony? >> i wouldn't say adversely affect free speech. i think it would tend to lower the shield wrongly in some cases, even though it is aimed at remedying a lot of harms that are very important. but we have concerns about the kinds of civil procedure issues that miss goldberg was speaking to earlier. >> let me ask you this. if you look at the carveouts we have there, i'm subject to liability potentially under some of those, depending on what i'm doing, and you are subject to liability. it doesn't mean you will lose the case, but you are subject to liability. i don't hear anyone suggesting that those carveouts -- those topics -- limit my free speech or your free speech in any way. how is it that it's limited when we apply it to the big tech arena? >> again, i would say we're not -- i'm not saying it's limiting free speech. what i'm saying is that when you have, for instance, the lowering of the shield upon the receipt of any request for injunctive relief -- >> let me ask you this question. if you and i can be subject to these things, why can't big tech be subject to them? >> they can be. is that a better state of the world? these platforms do -- >> why is it not a better state of the world? >> these platforms -- >> why is it not good enough for big tech? >> they provide real benefits for people. they should be held liable when they go beyond that. some of those changes could be -- we would suggest not having an automatic trigger that takes away the liability shield, which has benefits but can cause great harm when abused. >> the gentleman's time has expired. >> i apologize for trespassing. i yield. >> that's all right. chair recognizes mr. walberg for five minutes. >> i want to get your thoughts on my discussion draft that would establish a carveout from section 230 protections for actions based on a claim relating to reasonably foreseeable cyber bullying of users under the age of 18. in my draft, cyber bullying is defined as intentionally engaging in a course of conduct that would reasonably be foreseen to place an individual in reasonable fear of death or serious bodily injury, and that causes, attempts to cause, or would reasonably be expected to cause an individual to commit suicide. this would mean that an interactive computer service would need to know of a pattern of abuse on its platform. do you think that narrowly opening up liability would lead to changes by tech companies that reduce cyber bullying online? >> i think it will lead to some changes on the part of platforms. i'm not sure they would be good changes. the problem is whenever you list -- this is what i call the reverse spiderman principle: with great responsibility comes great power.
if you put platforms in a position where they are liable for not taking down cyber bullying, they are going to have to become the policemen of this thing. somebody says, this person is saying all of these things and they put me in fear of serious bodily injury. the person who is posting says, no, no, no, you are misunderstanding, this is legitimate criticism, there's debate about some event that happened at school. i will give you an example. there have been incidents where a young woman accuses a boy of, say, raping her. the boy says, that's cyber bullying of me, or that's bullying of me, because it's all a lie. this is putting me in fear of violence from third parties. it may also lead me to feel suicidal or something like that. do we want platforms to be in a position where they are deciding who is telling the truth and who isn't, and whether in fact this is indeed the sort of material that should be taken down? i don't think that that's something that should be left to platforms. schools may have authority to investigate this and to deal with it in some situations. law enforcement may in some situations, if there are death threats. i don't think platforms, which don't have subpoena power and don't have real investigative power, should be made internet bullying cops. >> thank you. appreciate that. mr. wood, in the case of cyber bullying online, while cyber bullying may not be illegal, many times it can rise to a level which may present a cause of action, such as harassment claims. in those instances, do you think my section 230 discussion draft carveout for cyber bullying would provide a pathway for parents and children to seek relief? >> yes, thank you, mr. walberg. we tend not to favor carveouts. we heard about the harms these activities cause when platforms facilitate them. rather than tying the liability exemption to specific categories, we would take a more comprehensive if less targeted approach that says any time the platform is facilitating harm, or its own conduct is causing that harm, then it should be liable for damages, and not necessarily solely for the initial user post. obviously, that's a spectrum. we think courts should look at that and not be precluded from examining it. >> thank you. appreciate that. i took more than my time in the first panel, so i give this back to you. >> that's very generous of you, mr. walberg. i appreciate that. mr. soto, you are recognized for five minutes. >> thank you, mr. chairman. i thank you and the ranking member and my colleagues for a spirited debate in panel one. i want to focus on common ground that i have gathered after hearing so many of our colleagues from both sides of the aisle on exemptions to 230. the main frustration is there are many things that in the real world would have consequences, but when you do them virtually, you are exempt, whether it's criminal activity, whether it's violating civil rights, whether it's even injuring our kids. many of these things, if you did them in real life as a newspaper or radio station or as a business, you would be liable for. you are not liable magically because it's in the virtual world -- because of 230. i want to focus on those areas of common ground i saw: protecting civil rights, stopping illegal transactions and conduct, and protecting our kids. i will start with you, attorney goldberg. we have hr-3184, which attempts to remedy civil rights violations. i want to get your opinion on the importance of injunctions in these civil rights violations, whether they are ongoing for a victim, and your thoughts on damages.
>> i think injunctive relief is important. the current standard is you can't enforce an injunction against a tech company because of section 230, and you can't include them as a defendant because of section 230. take my client, for example. she was the victim of extreme cyber stalking. her ex-boyfriend impersonated her and made bomb threats all around the country to jewish community centers. he was sentenced to 60 months in federal prison. a lot of the threats he was making were on twitter. he smuggled a phone into prison, got in trouble for it, got re-sentenced. twitter won't take that content down, even though it was the basis of his sentence and really very much related to why he was in trouble in the first place. i can't get an injunction against them. even if i try to get a defamation order, i can't enforce it, because twitter would say its due process is violated. >> thank you. time is of the essence, and there's nothing you can do about them without an ability to get injunctions. another area is protecting our kids. ambassador, you discussed this a little bit in your testimony. where is the line? how do we protect kids under 18 online on the social media sites, in your opinion? >> i think what we see is, again, the platform design, as miss goldberg discussed but also as we have seen in the facebook papers: the platform design connects people who can harm children and promotes content into their feeds that can harm children. i think as you look at remedies, figuring out how you can hold the platforms responsible without creating the negative effects mr. wood has described -- by narrowly targeting their design and the serious, serious harms, either physical or, if there's a way to include them without becoming too broad, the emotional harms -- is essential. we hear from children all the time, i wish the platform would wipe my algorithm clean. they are sending me stuff that's making me worse. we hear about this epidemic of mental health issues, especially among young girls. they go back on and back on and back on. that's where their social life is. they are fed these damaging self-images that hurt them. >> thank you. in any other situation, a commercial entity would be liable for putting our kids in danger like that. dr. franks, welcome from the sunshine state. i want to talk about stopping illegal conduct and transactions beyond just the civil rights arena, and i want your advice on what we could pursue to stop illegal transactions like drug deals, among other illegal conduct. >> part of the challenge of this, and part of the reason why i am somewhat hesitant to endorse approaches that take a piecemeal approach, is what you are pointing out: there are numerous categories of harmful behavior. these are just the ones we know about today. the ones that are going to happen in the future are going to be different. they're hard to anticipate. this is why the effective way of reforming 230 is to focus on the problem of the perverse incentive structure. we need to ensure that this industry, like any other industry, has got to think about the possibility of being held accountable for harm. whether that is illegal conduct, whether that is harassment, whether that is bullying, they need to plan their resources and allocate their resources and think about their products along those lines before they ever reach the public. they need to be afraid that they will be held accountable for the harms that they may contribute to. >> your time has expired. >> thank you.
i yield back. >> chair recognizes miss rice for five minutes. >> thank you, mr. chair. i think it's important for us to remember that the last time both houses of congress agreed to change internet liability laws was in 2018, when congress passed and the president signed the stop enabling sex traffickers act. even though not much time has passed since then, i believe our understanding of how online platforms operate and how they are designed has evolved, along with the conversation about section 230 liability protection, in recent years. miss goldberg, as an attorney who specializes in cases dealing with revenge porn and other online abuse, can you discuss whether and how it has impacted your cases? >> sure. you know, as a basic matter, it has come to be a bit problematic in my practice area, because it conflates child sex trafficking with consensual sex work. i did plead it recently in a case i told you about, which basically says that omegle did facilitate sex trafficking on its platform when it matched my 11-year-old client with a 37-year-old man who then forced her into sexual servitude. they claim they are free from liability. right now, it's the best hope we have when it comes to child sexual predators on these platforms. >> could you talk now about the concerns that have been raised by many people about the impact on sex workers? you mentioned that before. it's my understanding that it amends section 230 for state suits and some civil restitution suits dealing with sex trafficking and prostitution, and it created federal criminal liability for websites that facilitate it. how does the inclusion of criminal liability affect how it operates? >> my understanding is that there's been one case that doj has brought against a platform. platforms lose their immunity for state prostitution laws in addition to federal. i think it does create a compelling scenario where you could have a state prosecutor arrest mark zuckerberg for promoting sex trafficking on facebook. i think it really hasn't played out that much. it certainly created a lot of concern for sex workers who feel that their lives are in danger by having to go back out on the streets. >> right. thank you all so much for your time today. mr. chairman, i yield back the balance of my time. >> chair recognizes miss eshoo for five minutes. >> thank you, mr. chairman. thank you to the witnesses on this, the second panel. this may be one of the longest hearings that the chairman has overseen, and i appreciate your patience, because it's a long day for you as well. to the ambassador: i ask this because you are a veteran of the house intelligence committee. in your testimony you discuss the national security risk associated with inaction on clarifying section 230. you especially mention how terrorists use online platforms. it was chilling. can you tell us more, rather briefly, about how terrorists use social media platforms? >> am i on now? thank you for your leadership on these issues. quickly, the families of victims of terrorist attacks by hamas, a u.s.-designated foreign terrorist organization, argue facebook allowed hamas to post content that encouraged terrorist attacks in israel, despite the fact that facebook's own terms and policies barred use by them. the attackers saw the content because facebook's algorithms directed it into the personalized news feeds of the individuals who harmed the plaintiffs. they allege that hamas used facebook to celebrate the attacks and to support further violence against israel.
when the u.s. court of appeals for the second circuit held that section 230 shielded facebook, one of the judges urged congress to better calibrate the circumstances in which immunization is appropriate in light of congressional purposes. he added that whether internet companies should be able to leave up dangerous content is a question for legislators, not judges. >> this is really chilling. it seems to me, as a non-lawyer, both from the testimony today and from reading what i think is a really very well drawn memo on the part of the committee staff, that the courts are saying to congress, we need to do something about this. i said earlier today, with the first panel, that i was a conferee on the '96 telecom act. we certainly did not write section 230 to allow any social media platform to be able to undertake the activities that you describe. thank you to you and your good work. for mr. wood, i really appreciate your thoughtful and nuanced testimony today. can you just further elaborate on your recommendation that congress should clarify the plain text of 230? you point to how the court's interpretation in zeran versus aol was overbroad. that's a case from 1997. that's a long time ago. how has that created a precedent for how courts interpret section 230 today, i think in an overly broad way? can you clarify that for us? >> yes, that's right. for a non-lawyer you got it right: 1997. some plaintiffs have gotten over that hurdle in some product liability cases, some cases where snapchat was held responsible for a filter, but zeran has precluded liability any time there's user-generated content in the offing. clarifying the statute would say there's a distinction between publication, where platforms are not liable, and something else -- some further knowledge, some further amplification or distribution, whether algorithmic or not -- so there could be relief for plaintiffs who see the company's conduct either aiding and abetting harm or producing the kinds of engagement they are profiting from while harming people in the process. >> thank you. mr. chairman, on the written testimony, we received that about an hour before the hearing began today. i don't know if the committee had it earlier and distributed it later, or if it was just late. in order to take advantage of it, we need it the night before, so that as we are preparing for the hearing we can read the testimony, which is what i do the night before. i don't know why or how this -- >> i don't have an answer for you. we will check that out. time has expired. chair recognizes mr. cardenas for five minutes. >> thank you, mr. chairman. thank you for having this important hearing. earlier this year, i, alongside senator klobuchar, sent letters to tech ceos raising the alarm over the rate of spanish-language and other non-english misinformation and disinformation across digital platforms, and over their lack of transparency regarding efforts to limit the spread of that harmful content in all languages, content that can and sometimes does result in the loss of life. platforms are not investing in combating spanish-language and other non-english misinformation. spanish-language moderation efforts on social media sites, including facebook, fail to tackle the accounts spreading viral disinformation targeting hispanics, promoting vaccine hoaxes and election misinformation. some of it results in the loss of life and certainly in other horrendous harms to victims.
mr. wood, what could be done to ensure the integrity of consistent and equitable enforcement of content moderation policies across all languages in which a platform operates, not just in english? >> yes, thank you for the question, and thank you for calling attention to this issue. they have done work highlighting this grave disparity. 230 is central not only to this hearing but to everything that platforms do, yet i don't know that there's a 230 response to your question. we think that when platforms have terms of service that prohibit content -- however clear or good those terms are, people can debate -- they should enforce them equitably and not solely in english. the same kinds of disinformation they thought were harmful enough to take down in english should be addressed in other languages too. there are transparency obligations that they should be fulfilling to show they are honoring their own terms of service. i don't see a 230 angle here, per se, though obviously 230 is central to everything. could they be held liable by the ftc for failing to honor their terms of service and engaging in unfair and deceptive acts and practices? i think the answer is yes. companies have -- and some have done this, not just contemplated it -- tried to raise a 230 defense against ftc enforcement against that kind of unfair and deceptive application of their terms of service. there might be something to button up there in any 230 reform that moves forward. >> mr. wood, these massive, massive information organizations are profiting from the proliferation of truths or lies, and it appears through the testimony we heard today that lies tend to make them more money. negative discourse seems to make them more money. having people interact with each other on a negative basis actually gets them more money. they believe they can hide behind the non-liability of section 230. if we would exercise our responsibility to reset 230 to apply more clearly, do you think that may offer a deterrent for them to stop ignoring their ability to do more to protect people from harmful content? >> yes, i think it could. as we discussed, in our view, when platforms know they are causing harm, that's different from publishing and hosting the content in the first instance. what you are pointing to is the fact that -- we are supportive of section 230. we think it's an important piece of law to retain. however, when platforms have the time and energy and money to find out what people like and connect them to each other, and to look at that personal data and analyze it when it makes them money, but they don't have the time and attention and energy to do that when it's causing harm, that's kind of hard to believe. that's why big companies like to wave their hands and say, we don't have -- this would be burdensome for us, it's beyond our capacity. yet they find the time and ability to do it when it adds to their bottom line. those are the answers we are not willing to accept. >> we had testimony earlier from a whistle-blower who clearly stated that facebook alone, just that one platform, is going to be talking about a profit of tens of billions of dollars. she clearly pointed out, with facts and information that she divulged through her whistle-blower actions, that those profits soar when they ignore life and what's best for the human interests of their viewers. anyway -- >> the gentleman's time has expired. >> i yield back. >> i thank the gentleman. chair recognizes miss kelly for five minutes.
>> thank you, mr. chair. thank you all for testifying today and thank you all for your patience. dr. franks, in your testimony you state that, i quote, the dominant business model of websites and social media is based on advertising revenue. they have no natural incentive to discourage abusive or harmful conduct. one example i was particularly concerned about was a tiktok challenge in schools that was encouraging students to destroy school property and slap teachers. can you explain how a model that prioritizes advertising revenue encourages social media platforms and other websites to promote more harmful or abusive information? >> yes. thank you. it means we're not asking people to pay for a product. that is to say, people think they are getting something for free. the only way for this to be profitable for an industry is for it to sell you more and more ads that are more targeted. what that sets up, in terms of the incentive structure, is the maximization of what is called engagement. what that means is: we want people to live on these platforms. we want them to be addicted to these products. we want to learn as much about them as we can. that is the kind of incentive structure that section 230 is allowing to flourish, essentially without any kind of hindrance. if that's your model, you are not offering higher and higher quality. you are not telling people that the reason they are paying for something is because you are giving them a better service. you are trying to keep them on that platform. unfortunately, because of human nature, the things that keep people addicted and keep them on a platform are things that are dangerous, provocative, extreme. that is the vicious cycle we find ourselves in. >> thank you. how does the use of personalized algorithms or other profit-motivated design choices by some social media companies and other platforms amplify this problem? >> in a couple of directions. we can think about particular kinds of vulnerabilities. if an industry is well aware that the person using its platform is vulnerable to body image issues, or particularly vulnerable to suicidal thoughts, these are things that the algorithm can feed them more of. the algorithm is picking up on those tendencies and vulnerabilities. that is one way in which personalized algorithms can lead to harm. the other direction is when people are searching for terms and for resources and ideas about how they can distribute their harm. in that sense, based on what the individual him or herself is doing, what they are putting into the system, they are getting back an incredible array of entryways and rabbit holes to more extreme versions of content and more and more ways to harm other people. >> thank you. ambassador, do you have anything you would like to add to this? >> yes. one of the things that is often said is that the platforms have no incentive to cause these harms; it would be a pr hit; the incentives run in the other direction. what i worry about is that the incentives run towards doing these harms. there's a regulatory arbitrage: the platforms, unlike other businesses, don't have to abide by laws that this congress and past congresses have passed. it's true that broadcasters and newspapers knew that if it bleeds, it leads. people will watch violence. but they didn't fill their entire program with bloody murders, because they felt they had some obligation to show other things.
when the platforms don't follow those norms -- those weren't laws, they were just norms -- they can get more eyeballs and more advertising dollars, but only by breaking so many of the societally beneficial norms that we have. it is similar with companies that operate on these platforms. i talked to an international vaccine expert who said, i feel as though conspiracy theorists are using social media and i am fighting the engine of social media. >> thank you so much. mr. chair, i yield back. >> well, i want to thank our witnesses for their participation today, for your patience, and for your excellent answers to our members' questions. it's going to be very helpful as we try to work together in a bipartisan way to get a bill that we can pass in the house, get passed in the senate, and have the president sign. i know we still have a lot of work ahead of us, but we are committed to working with our colleagues in the republican party to put our heads together, come up with a good bill, vet it thoroughly, and put it before the members. you have all been very helpful in that process. we appreciate it. i would like to enter the following testimony and letters into the record: a letter from the national hispanic media coalition in support of hr 5596, the justice against malicious algorithms act; a statement from preamble in support of hr 2154, the protecting americans from dangerous algorithms act, and hr 5596; a letter from the coalition for a safer web in support of hr 5596 and other pending committee legislation; a letter from the anti-defamation league in support of reforming section 230 to hold platforms accountable; a letter from the alliance to counter crime online in support of congress reforming section 230 of the communications decency act and adopting transparency provisions; letters from victims of illicit drugs applauding the energy and commerce committee for its efforts to reform section 230; a letter from the leadership conference on civil and human rights expressing its views on the need for major tech companies to address rights on their platforms; proposed revisions to sections 230(c)(1) and (c)(2) from the alliance to counter crime online; a press release from the coalition for a safer web; an article titled facebook and google fund global misinformation; an article titled facebook knows instagram is toxic for teen girls, company documents show; an article from the wall street journal titled facebook says its rules apply to all. a secret elite is exempt; an article titled facebook tried to make its platform a healthier place. it got angrier instead; an opinion piece from "the new york times" titled what's one of the most dangerous toys for kids? the internet; an article from "the washington post" titled facebook's race-blind practices around hate speech came at the expense of black users, new documents show; an opinion piece by bruce reed and james steyer in protocol titled why section 230 hurts kids and what to do about it; a letter from the chamber of progress; a statement by guy rosen, vp of integrity at meta, titled update on our work to keep people informed and limit misinformation about covid-19; an opinion piece by "the wall street journal" editorial board titled anthony fauci and the wuhan lab; remarks by then-president trump, vice president pence, and members of the coronavirus task force; and finally a letter from the computer and communications industry association. without objection, so ordered. i remind members that, pursuant to committee rules, they have ten business days to submit additional questions for the record to be answered by the witnesses who have appeared. 
i would ask each witness to respond promptly to any such questions you may receive. with that, the committee is adjourned. >> c-span is your unfiltered view of government. we're funded by these television companies and more, including charter communications. >> broadband is a force for empowerment, empowering opportunity in communities big and small. charter is connecting us. >> charter communications supports c-span as a public service along with these other television providers, giving you a front row seat to democracy. >> at least six presidents recorded conversations while in office. hear many of those conversations on c-span's new podcast "presidential recordings." >> season one focuses on lyndon johnson. you'll hear about the 1964 civil rights act, the presidential campaign, the gulf of tonkin incident, the march on selma, and the war in vietnam. not everyone knew they were being recorded. >> certainly, johnson's secretaries knew, because they were tasked with transcribing many of those conversations. in fact, they were the ones who made sure the conversations were taped, as johnson would signal through an open door between his office and theirs. >> you'll also hear some blunt talk. >> presidential recordings, find it on the c-span now mobile app or wherever you get podcasts. >> download c-span's new mobile app and stay up to date with the biggest political events, from live streams of the house and senate floor and key congressional hearings to white house events and supreme court oral arguments, even our live morning program washington journal, where we hear your voices every day. c-span now has you covered. download the app for free today. >> sunday, february 6th, on in-depth, a law professor will be our live guest to talk about race relations and inequality in america. her many books include "the failures of integration." join in the conversation with your phone calls.
