
Transcripts For CSPAN Experts Testify On Big Tech Accountability - PART 2 20240709



on holding big tech companies accountable for user-created content. legal experts and consumer advocates testify in this portion.

chairman doyle: ok. we are ready to roll. ok. welcome. we are ready to introduce our witnesses for today's second panel. are we good? ms. carrie goldberg, owner of c.a. goldberg. mr. matthew wood, vice president of policy and general counsel, free press action. mr. daniel lyons, professor and associate dean of academic affairs, boston college law school, and a nonresident senior fellow of the american enterprise institute. mr. eugene volokh, gary t. schwartz distinguished professor of law, ucla school of law. the honorable karen kornbluh, director of the digital innovation and democracy initiative and senior fellow of the german marshall fund of the united states. and dr. mary anne franks, professor of law and michael r. klein distinguished scholar chair at the university of miami school of law, and president and legislative tech policy director of the cyber civil rights initiative. welcome to all of you and thank you for being here. we look forward to your testimony. we will recognize each of you for five minutes to provide your opening statement. there is a lighting system in front of you. you will see lights. it will start initially green, turn yellow when you have a minute left. when it turns red, it is time to wrap up your testimony. so we will get started right away. ms. goldberg, you are recognized for five minutes.

ms. goldberg: good afternoon, chairman doyle and each member of this committee. my name is carrie goldberg. i stand for the belief that what is illegal offline should be illegal online. i founded my law firm to represent victims of catastrophic injuries. we sue on behalf of victims of stalking, sexual assault and child exploitation. in most of my cases, well over 1,000 now, my clients' injuries were facilitated by tech companies. and i have to tell you, the most miserable part of my job is telling people who come to me for help, who have suffered horrendous nightmares, that i cannot help them. congress passed a law in the mid-90s that takes away their right to justice. we cannot sue, because section 230 lets tech companies get away with it. back then, lawmakers said removing liability for moderating content would incentivize young tech platforms to be good samaritans and keep bad content and materials out. we know that that did not happen. i want to tell you three stories. she is 11 years old. he is 37. they both are on a site. the banner says, talk to strangers. omegle matches the two of them for a video chat. the man comforts her in her 11-year-old loneliness. at first he wants to see her smile. then he asks to see another body part. and another. and another. and she does protest. he tells her, you are free to stop, but i would have to share this material with the police, because you are breaking the law, you are committing child pornography. this crime against this child goes on for three years. he makes her perform for him and his friends on a regular basis. he forces her back onto omegle to recruit more kids. 10 days ago, we filed a lawsuit on behalf of this young girl. we argued that omegle is a defectively designed product; it knowingly pairs adults and children for video sex chats. omegle is going to tell us it was her fault, and that it has no duty to manage its platform because section 230 says it doesn't have to. a terrified young man enters my office. his ex-boyfriend is impersonating him on the hookup app, grindr.
he has sent hundreds of strangers to my home and my job, he tells them i have rape fantasies and if i protest, it is part of the game. matthew says he has done everything. he has gotten an order of protection. he has reported the abuse to the police 10 times. he has flagged the profiles 50 times to grindr, and they have done nothing. so we get a restraining order against grindr, to ban this malicious user. grindr ignores it. the strangers keep coming, following matthew into the bathroom at work, waiting for him in the stairwell of his apartment building. over 1,200 men come. in her order throwing matthew's case out of court, the judge said grindr had a good faith and reasonable belief that it was under no obligation to search for and remove impersonating profiles. that good faith and reasonable belief comes from section 230; it is actually used to justify why they do not have to moderate content. exactly the opposite of what congress intended. so the men keep coming for another 10 months after we brought our case, as many as 23 times a day. and grindr knew the whole time. over the past six months, i have met with seven families, each whose child was killed by purchasing one fentanyl-laced pill. when i say catastrophic injuries, it is not hyperbole. the traps are set by internet platforms, which have profited beyond any summit of wealth and power in the history of the universe. now, i am not arguing to end the internet or any of these companies, or to limit free speech. the nightmares my clients face are not speech-based. we must distinguish between hosting defamatory content and enabling and profiting off of criminal conduct. for hundreds of years, our civil courts have been how everyday people get justice against individuals and companies who have caused them injuries. it is the great equalizer, and that basic right is gone. we have a mess here that one congress created, but that this congress can fix. i look forward to your questions and hopefully to talking about my ideas for reform. thank you.

chairman doyle: thank you, ms. goldberg. mr. wood.

mr. wood: thank you for having me back. chairman doyle, i must thank my hometown congressman for your kind attention to my input over the years, since the last time i had the honor to appear before you. today's hearing proposes holding big tech accountable through what it describes as targeted reforms to section 230. that framing is understandable in light of the testimony you just heard from others here about the harms the platforms allow or cause. free press action has not endorsed or opposed these bills; we see promising concepts in them, but cause for concern, too. that is because section 230 is a foundational and fully necessary law. it benefits not just tech companies, but the hundreds of millions of people who use their services and share ideas online. that is why congress must strike the right balance, preserving the powerful benefits of the law while considering revisions to better align courts' outcomes with the statute's plain text. section 230 lowers barriers to people posting their own content, ideas and expression without needing the preclearance that platforms would demand if they could be liable for everything those users say. this law protects platforms from being treated as publishers of other parties' information, and it permits platforms to make content moderation decisions while retaining that protection. section 230 encourages both the open exchange of ideas and takedowns of harmful material.
without those protections, we risk losing moderation, and we risk chilling expression, too. that risk is especially high for black and brown folks, lgbt people, immigrants, religious minorities, dissidents and all whose ideas could be targeted for suppression by powerful people willing and able to sue just to silence statements they do not like. but as you have heard today, members of those same communities can suffer catastrophic harms online from platform conduct, too. so it is not just in the courtroom that marginalized speakers must fear being silenced or harmed; it is in the chat room, too, in social media, comments sections and other interactive apps. repealing section 230 outright is a bad idea and it would not fix all these problems. we need privacy laws to protect against abusive data practices, and we need to ensure civil rights protections are applied to platforms. without 230, there may be tort remedies for criminal actions in some cases, and in a few cases for underlying content, but no remedy for amplification of underlying speech that is protected by the first amendment. while the first amendment is a check on claims against speech, and a constraint on speech torts like defamation, those torts are not per se unconstitutional. 230's current text should allow injured parties to hold platforms liable for their own conduct, and even for content that the platforms create, when it is actionable. and courts have let some suits go forward over platforms' own conduct: for posing their own discriminatory questions, for layering content over user posts and encouraging them to drive at reckless speeds, or for taking part in transactions in ways beyond letting third-party sellers merely post their wares. but most courts have read the law far more broadly, starting with zeran v. aol, which held that the prohibition on publisher liability precluded distributor liability, too, even once a platform has actual knowledge of the harmful character of material it distributes. people ranging from justice thomas to professor jeff kosseff agree this is not the only plausible reading of 230's plain text. decisions like zeran prevent plaintiffs from pursuing liability for the platform's conduct, not just its decision to host others' content. that is why we are interested in bills like representative banks's hr 2000 or the senate's bipartisan pact act, clarifying the meaning of 230's present text by reversing zeran and allowing suits over platform conduct, including continued distribution of harmful content once the platforms have knowledge of the harm it causes. while bills like yours take aim at that same laudable goal of deterring harmful amplification, we are concerned to some degree about legislating the technology in this way. it could lead to hard questions about definitions and exemptions, rather than a focus on a provider's knowledge and liability. we do not want to chill amplification that is beneficial, but we also do not want to prevent accountability when platforms' actions cause harm, even in the absence of personalized recommendations, or for important subjects like civil rights. the fact that a platform receives payment for publishing or promoting content could be highly relevant in determining its knowledge and culpability for any harm the distribution causes, but monetizing content or using algorithms should not automatically switch 230 off.
unfortunately, the safe tech act tips even further toward the chilling effects we fear, risking overly broad changes to 230 by revoking those protections any time a platform receives payment at all, and by dropping the liability shield whenever a plaintiff seeks injunctive relief. we look forward to continuing this conversation on these important ideas in your questions today, and in the legislative process going forward.

chairman doyle: ambassador kornbluh, you have the floor.

hon. kornbluh: thank you for this opportunity to testify. thank you, chairman doyle, ranking member rodgers, and committee members for the opportunity to testify. i will stress three points today. first, the internet has changed dramatically since the rules of the internet were written. second, section 230(c)(1) must be clarified or we will lose protections and rights that we take for granted. third, it is long past time for regulations to be updated to limit harms and protect free expression. section 230 was critically important in allowing the internet to flourish. section 230(c)(2) remains essential to encouraging service providers to screen and filter dangerous third-party content. however, the internet is no longer the decentralized system of message boards it was when 230 was enacted. social media companies differ in scale from 20th-century publishers; as we heard earlier, facebook has more members than most major religions. more important, their design makes them an entirely different animal. they offer the most powerful advertising and organizing tools ever created. they use vast amounts of personal data to tailor the information users see. and they are not transparent to the public or to users. meanwhile, our economy, politics and society have moved online in ways never imaginable. facebook and google account for an astonishing half of advertising dollars, and teenagers may spend an average of three to four hours a day on instagram. our elections occur largely online, beyond public view. significant harms from the status quo are evident from a few examples. a covid conspiracy film was shown more than 20 million times in only 12 hours before it was taken down by all major platforms. families of victims of terrorist attacks allege terrorists used platforms to facilitate recruitment and commit terrorism. and the facebook papers show the deliberate use of algorithms to lead young girls to content promoting anorexia. unless section 230 is clarified, we will grow increasingly less safe and less free. broad application of section 230(c)(1) has precluded more responsible behavior by large platforms. as revealed in the facebook papers, the company rejected employee ideas for changing design flaws that would have limited algorithmic harms. in addition, the outdated rules pose a national security risk when foreign agents and terrorists can use the platforms' tools to recruit and organize. that is why judges in terrorism cases, civil rights organizations and children's safety groups are asking congress to act. the bills under consideration by the committee would rightly peel back immunity when social media platforms promote the most egregious types of illegal content that produce harm. hr 5596, the justice against malicious algorithms act, in particular would incentivize platforms to reduce the risk of potential harms to children and to victims of harassment and violence. hr 2154 would incentivize them to reduce the risk that international terrorists use their sites to organize.
and the third point i would like to stress: regulations also have to be updated. it is not enough to have liability. there is not always a plaintiff willing to sue, even when there is societal harm. companies lack guidance on what is expected of them, so regulatory agencies should provide clarity. the bipartisan honest ads act would require the same transparency for online campaign ads as is required on broadcast tv. this should be extended to include know-your-customer provisions so that dark money groups are unmasked. the federal trade commission should require data to shed light on large platform practices, the equivalent of the black box data recorder the national transportation safety board gets when an airplane crashes. we do not have that kind of data after an election, for example. in 2016, the only reason we know what happened is because the intelligence committee had the platforms turn over the data, and we learned about the targeting of african-americans. the point is, we should not need a whistleblower to access data. in addition, regulators could oversee platforms' development of best-practice frameworks for preventing illegal and tortious activity, which courts could refer to in deciding whether a company is negligent, as my colleague proposed. this effort could be made consistent with proposals in the eu's draft digital services act. mr. chairman, it is essential to update rules as the internet continues to change. otherwise, key protections our country takes for granted may become irrelevant. thank you.

chairman doyle: mr. lyons, you are now recognized. you may need to unmute. you need to unmute, mr. lyons. move on? ok, we will go to mr. volokh. you are now recognized. then we will go back to mr. lyons. can you unmute, also, sir? it looks like your microphone is not connected, we are being told. [laughter] should we go to dr. franks? ok, we will go to dr. franks while our remote witnesses get their technical issues fixed. you are recognized for five minutes.

dr. franks: you have heard mixed accounts of the nuances and complexities of the section 230 debate, and it is incredibly easy to get lost in them and let the perfect be the enemy of the good. you have heard in testimony that so much irreparable damage has been done already because of the tech industry's impunity, but congress has an opportunity right now to avoid future harm. and it is vitally important that we keep the future in mind as we are thinking through legislation and reform, because any solutions we have for today need to be able to address our current crises of disinformation, of exploitation, of discrimination, as well as being nimble enough to respond to the evolving changes and challenges of the future. at the most fundamental level, the problem with the tech industry is the lack of incentive to behave responsibly. the preemptive immunization from liability provided by section 230 means the drive to create safer or healthier online products and services simply cannot compete with the drive for profits. as long as tech platforms are able to enjoy all the benefits of doing business without any of the burdens, they will continue to move fast and break things, and leave average americans to pick up the pieces. section 230(c)(1), the provision responsible for our current dystopian state of affairs, creates what economists call a moral hazard: when an entity is motivated to engage in increasingly risky conduct because it does not bear the cost of those risks.
the devastating fallout of this moral hazard is all around us: an online ecosystem that is flooded with lies, extremism, racism, misogyny, fueling offline harassment and violence. one of the reasons the section 230 debate is so challenging is that it is backwards. the question should not be, what justifies departing from the status quo of 230? the question should be, what ever allowed the status quo to exist in the first place? we should be demanding an explanation for the differential and preferential treatment of an industry that has wreaked havoc on so many lives, reputations and on democracy itself. every one of us in this room right now would face liability if we harmed other people, and not only if we caused the harm directly or acted intentionally; we could also be held accountable if we contributed to the harm, and if we acted recklessly or negligently. that is also true for businesses. store owners can be sued for not mopping up spills, car manufacturers can face liability for engines that catch on fire, hospitals can be sued for botched operations. virtually every person and every industry faces the risk of liability if they engage in risky conduct that causes harm. that is good and it is right, because it avoids the creation of moral hazards. the possibility of liability forces people and industries to take care, to internalize risk and prevent foreseeable harm. there are those section 230 defenders who will say that the tech industry is different, that it is not like these other industries because it is about speech, and speech is special and deserves special rules. there are two important responses to this. one, the tech industry is not the only speech-focused industry. speech is the core business of newspapers, radio stations, television companies, book publishers and distributors. speech is integral to many workplaces, schools and universities. yet all of these entities can be held liable when they cause or promote, and even in some cases when they fail to prevent, harm. none of these industries enjoys anything like the blanket immunity granted to the tech industry. the potential for being held responsible for harm has not driven any of these industries into the ground or eradicated free expression in these enterprises. second, 230's immunity is invoked to protect far more than speech. people use the internet to do a variety of things. they shop for dog leashes, they sell stolen goods, they pay their bills, they renew their driver's licenses. section 230 allows intermediaries to be immunized not only for speech provided by others, but for information. this has allowed tech platforms to absolve themselves of responsibility for virtually anything anybody does online, a protection that goes beyond anything the first amendment would or should protect. the current interpretation of section 230 immunity is an unjustifiable anomaly that flies in the face of legal and moral principles of collective responsibility. three changes are necessary to effectively address this. number one, section 230's legal protection should be limited to speech, not information, a recommendation that is reflected in the safe tech act. number two, as many of the reform proposals before this committee suggest in some form, those protections should not extend to speech that an intermediary directly encourages or profits from. number three, section 230's protection should not be available to intermediaries that exhibit indifference to unlawful content.
these are the essential steps necessary to change the perverse structure of the tech industry that exists today. thank you.

chairman doyle: thank you very much. we have votes on the floor. we are going to check to see if -- what's that? we are going to check to see if we have any republicans on remote, because i am willing to stay and get the last two done. ok, we are going to take a recess and we will be back right after our votes. sorry about that.


