
Welcome to our panel today, discussing the topic of social media content moderation. I want to begin by thanking our cosponsors of the session on [indiscernible]. I will also announce that this is being recorded. Additionally, we welcome our live audience on C-SPAN. We will allow for audience questions at the end of our session; with about 20 minutes of time remaining, I will invite you up to the microphone if you have any questions for our panelists. Additionally, a business meeting will follow. If you're interested, you can come speak with us at the end of this panel.

I would love to introduce to you our distinguished panelists today. Immediately to my left is Kate, an assistant professor of law at St. John's University School of Law. She is a fellow at the Yale Law School. Her research on networks, freedom of expression and private governance has appeared in [indiscernible] and in the popular press. Then we have an assistant professor of law at Drexel University. She is also a fellow at the Center for Democracy and Technology. Her work involves [indiscernible] government accountability. To her left is Emma, a board member of the Global Network Initiative, with work coming out in early 2020. That paper was part of a transatlantic working group on content moderation online and freedom of expression, part of the Annenberg institute. I would also like to welcome Eric Goldman. He is codirector [indiscernible] and practiced in Silicon Valley. His research and teaching focus on [indiscernible]. He has a blog on these topics which I find helpful. He has a big forthcoming project titled validating transparency reports. I welcome our panelists, and I would like to begin by asking: tell us what content moderation is. What do platforms moderate? How are these decisions made?

Thank you for the introductions and for having us here. I am excited to kick this off and lay brief groundwork about content moderation in private spaces. I will speak broadly about content moderation and let Eric and others set the legal framework for what allows this to take place. Content moderation from a private platform standpoint means focusing on the main platforms for speech like Twitter, YouTube and Facebook, though really all platforms use content moderation. Kickstarter has certain rules about what kinds of projects you can decide to crowdfund. Etsy has rules, as do eBay and Airbnb. All of these things are starting to become obvious to us now. Since 2016, there has been a change in people understanding that all of these platforms are working round the clock and constantly to take down or keep up the content posted by their users. That is part of what you would think of as the community they are trying to build for the product they are trying to sell. Content moderation and the rules around it are as much the product of any given company or any given platform as they are about a certain type of user's right to speech. Those are the competing ideals that this idea has grown up under.

The best way to explain content moderation is that it happened, and I think everyone can attest, it happened for a long time without people realizing it was happening. In 2015, when I started to dig into this research, I found that every conversation explaining my work had to begin with: you don't realize it yet, but things that you put up on Facebook are sometimes taken down by Facebook because they violate a rule. At that point, that was not clear to most people. Since then, there have been a number of high-profile moments that have gotten media attention. There was the Terror of War controversy in 2016, which involved a picture posted by a famous author.
The photo is sometimes called Napalm Girl. It shows a nine-year-old girl fleeing a napalm attack in the Vietnam War. Facebook had removed it because it was flagged as sexually exploitative imagery of a child. It had a huge historical impact and protest impact and educational impact, but none of those things were considered, so Facebook came under fire for censoring that kind of content. Similar to that were things they got wrong because they took things down; but they also get things wrong when they keep things up. There was a mentally ill man who took a video of himself shooting an elderly homeless man in the street point-blank and posted it on Facebook, and it stayed up for 2.5 hours before Facebook removed the video. That got a lot of negative press. These are moments in which we are becoming aware of the rules and the ways in which Facebook and other companies take down this content at scale. The more we learn about what the private process entails, the more we can think about the legal remedies we can start to put into place around it. That's where the conversation has to begin: how are these decisions being made?

I would like to go back to the definition. This is central to the entire panel and the conversation. There is a temptation to think of content moderation as something other than editorial judgment about what third-party content should be published. That is a divide in the conversation. Some people will not be shaken off the belief that it is not an editorial decision, that it is something else. To me, it is squarely an editorial decision. They are deciding which third-party content they are willing to publish and which they are not, at which point legal consequences flow. Where do you come out, or where are you starting today? Are you thinking of it as an editorial process, maybe more automated than we are used to, maybe more after-the-fact as opposed to pre-publication, but still the same basic process, or not? I will point out a case involving a Yelp review. The appellate court described the Yelp decision not to remove a review as speech administration, not an editorial judgment; they were the janitors, they were pushing brooms. If you decide they are pushing brooms, you can regulate it in a different way; or you can say, of course it is editorial decision-making, and we know how to regulate that. The content moderation framework leads us astray because it allows us to think we are not making policy decisions, but if we recognize that we are, we know what to do.

I want to throw in another dimension for people to think about. A lot of times our conversations about content moderation are bound up in horrific things that have happened on major platforms. We're talking about terrorist propaganda on YouTube, disinformation on Facebook. When we talk about content moderation more broadly, we are talking about a lot of techniques and practices, rules and procedures, that happen on Facebook and YouTube and on the smallest message board and forum and mailing list. As we discussed, it would be helpful to unpack that there can be judgment associated with different kinds of content moderation, and it's helpful to take a step back and think about how what we are talking about is something that any host of third-party content has to grapple with. They have to come up with rules they might apply to content on their website. They have to figure out how to apply them. Do they proactively seek out content and try to take it down according to the rules, or do they wait for people to notify them?
There are different ways that they have made these decisions, and they have benefits and drawbacks. It is important for us to think in public policy terms about the implications of how some of the biggest companies online have made those decisions and the impact that is having on our civil discourse, and we should talk about that. But let's not forget the long tail of many sites out there grappling with the same issues; that can be helpful for unpacking why these questions are as difficult as they are.

To thicken the custodian versus editorial distinction, and to get to the question of how this happens: let's say you upload a photo to Facebook. There is a microsecond between the upload and it actually being posted where it goes through a check against a couple of things using a tool called PhotoDNA, which is just a matching service. There is a universe of known child pornography that is maintained by a third-party organization. They have built a photograph matching service. It is a digital fingerprint, and they can instantly, in milliseconds, recognize whether that photo is a known piece of child pornography that has been distributed. If that's the case, it never gets posted; it is automatically blocked. This is also how YouTube works with Content ID and copyright. That is the proactive screening, what I would describe as the custodial aspect, the part that is not an editorial judgment but just a screening.

Once something is posted, pretty much all of these platforms rely, because of the volume of postings, on reactive moderation: other users flagging and reporting violations. Then those reports go into a queue and are sorted by priority through algorithms, but eventually most of them go to human moderators in call centers all over the world who have a queue of things they look at. A picture or image comes up and they have to decide whether it violates the rules and community standards. They decide whether it stays up or comes down or gets escalated. That is the content moderation system. A vast majority of the stuff that is flagged is not hard calls. It is basically teenagers having reporting wars with each other, or people who are angry at their neighbor so they decide to go through all of their posts and flag them, or people who decide they don't like how they look in pictures, so rather than untagging themselves, they flag the picture for removal. Then there are other graphic kinds of content, some bad content that gets past the initial custodial sweep at the beginning. Those are the hard questions that are the editorial judgments.

[indiscernible] what principles should govern in terms of fairness, accountability, transparency? I would like to kick this question off with professor [indiscernible].

First, it is important to note as background, as Eric mentioned, that if you subscribe to the idea that content moderation is the exercise of editorial discretion, then under the First Amendment framework a whole bunch of consequences flow from that, including that externally articulated ideas about the principles and safeguards that ought to apply might be called into question. Some of the accountability, fairness and transparency oriented ideas that are floating around about how to make content moderation more transparent and accountable to users, the public and regulators might not fly under the First Amendment framework, or under a framework that subscribes to this editorial discretion model. A lot of these interventions are not originating within the United States.
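As a rough illustration of the pre-publication screening step Kate described a moment ago, here is a minimal sketch of that pattern: fingerprint the upload and check it against a third-party database of known disallowed material before anything is published. This is a simplification under stated assumptions, not Facebook's or PhotoDNA's actual implementation; the hash values and function names are placeholders, and a real system would use a perceptual hash rather than a cryptographic hash like SHA-256.

```python
import hashlib

# Placeholder fingerprints standing in for a third-party clearinghouse database
# of known disallowed content (values are illustrative only).
KNOWN_DISALLOWED_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(file_bytes: bytes) -> str:
    """Compute a stand-in fingerprint for an uploaded file."""
    return hashlib.sha256(file_bytes).hexdigest()

def pre_publication_screen(file_bytes: bytes) -> str:
    """Return 'block' if the upload matches known disallowed content, else 'publish'."""
    if fingerprint(file_bytes) in KNOWN_DISALLOWED_HASHES:
        return "block"    # never posted; handled before publication
    return "publish"      # falls through to the reactive, flag-driven pipeline

if __name__ == "__main__":
    print(pre_publication_screen(b"example upload"))  # 'publish' for this placeholder
```

The point of the sketch is the sequencing: a match is handled before publication, so it never appears on the site, while everything else passes into the reactive, flag-driven process described in the panel.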
Many of them are originating within Europe. To some extent, we have sidestepped that inquiry entirely because other governments and actors are engaged in formulating these principles and safeguards.

Let's start with transparency. What do we want to know about content moderation? In many respects, transparency has been done through the use of reports and the disclosure of aggregate data about content takedown decisions. How many pieces of user-generated content are taken down for violating specific aspects of community guidelines? How many pieces are taken down because a government agency requested removal? How many pieces are taken down because an algorithm proactively flagged them? Ultimately, how many of these pieces of content are restored after that initial takedown? Those are the kinds of aggregate statistical data that are often conveyed in transparency reports. Increasingly, folks who work on these issues as researchers or advocates have expressed a lot of frustration that this kind of aggregate data is not enough to understand how these decisions are being made. What you really need is a granular, case-by-case analysis of how the rules are applied to a given set of facts and how that decision is made, both in the algorithmic context and in the human moderator context. As for making this granular data available, it will be interesting to see how platforms navigate these pressures moving forward.

Another transparency-related issue is publication of the community guidelines and the rules and standards for enforcing them. People tend to forget a lot of the time that some of the most interesting and important information we have gotten about how platforms enforce their rules has come through leaks to the press. For example, a couple of years ago, The Guardian published documents showing how Facebook was training its moderators to enforce the rules. Some would argue that the rules and guidelines should be public from the start.

Transparency is also an important aspect of accountability. What we know about the rules informs the way we might want to structure or think about accountability mechanisms. One way to think about accountability in this framework is accountability through the marketplace. Political pressure might encourage platforms to adopt different rules or to enforce them differently. For a long time, this has been the dominant way of thinking about platform accountability: if you don't like the rules, pressure the platforms to change them or leave the platform entirely. Increasingly, I think there is also interest in formulating different modes of accountability, thicker kinds of accountability. There are private forms of accountability mechanisms, and Kate is an expert on the Facebook Oversight Board. There are other ways of thinking about private accountability. One proposal is for multistakeholder, regional or state-based social media councils that would represent many different types of actors, like civil society, platforms, users and governments, on content moderation issues. That is one way of thinking about private accountability moving forward. There are also puzzles for accountability through government intervention, either through the formulation of private rights of action and judicial oversight or through administrative oversight. We see this in the U.K., where the duty of care proposal includes a provision that would encourage administrative oversight of how platforms are operating.

Fairness is probably the hardest to achieve. I don't even know what it means in this framework.
When platforms talk about having fair sets of rules, they are often really talking about having consistent rules, rules that are applied consistently within groups. If person A posts white nationalist content on social media and it is taken down, person B should not be able to post that. It might also mean that rules are consistent across groups, so that different kinds of prohibited speech that are equivalently bad are treated the same way. I think there are major questions, mostly political and social questions, not legal ones, about how we think about what fairness means in this context.

Following up on the transparency piece, which is a very common intervention that gets discussed when it comes to content moderation, let us see more: let me make a short plug for my next paper. I invite your feedback because I'm struggling with this project. How do we know that the numbers provided by the internet companies are true? They publish them, but how do we know? What do we do to validate those numbers? Can we send in the lawyers or the accountants under government compulsion to validate the numbers? If we are going to expect transparency, we have to know if we are getting accurate numbers, and if there is some transparency effort that might be obligated under the content moderation piece, then we are talking about editorial processes. It very well may be different than other disclosures the government requires, where such issues are not in play and the intervention of the government to validate the numbers might not be as intrusive into the editorial function. I would invite your feedback and thoughts about this. For us to make the transparency option work, we have to know if we can believe the numbers. I'm not sure how we are going to do that.

On the idea of how to validate transparency reporting numbers: it was around the time of the Snowden leaks that tech companies began providing transparency reports about their responses to government demands for user data. People were concerned they were handing over gobs of data to the U.S. government or many governments around the world, so companies started putting out numbers about the demands they got. It took many years of pressure to get the companies to start putting out numbers on their terms of service or content moderation enforcement, and we have only seen a couple of reports from the biggest companies in the past few years. One issue with understanding these numbers is that we are very nascent in the history of transparency reporting. There is not a lot of standardization across how the companies do the reporting, which is on the one hand infuriating to researchers, because how are you supposed to use these numbers to draw conclusions, or compare them across companies, or compare them for the same company across multiple reports? They often change their reports every six months or every year. There are also a lot of good reasons why they change their reports: they have developed a new capacity to report different information, or they found a different way to start accounting for something. These are some of the issues we will have to work out as the public policy discussion, whether it is voluntary or mandatory, moves forward.

One area that has potential is the government demands reports. We could try to compare Facebook's reports on how many demands it gets from the U.S. government to information from the government about how many demands it put into a company like Facebook. As far as I know, there are some places within the federal government that do some reporting on those types of demands, but there is nothing like a single aggregate report from the U.S.
or any other government saying, this is how frequently our law enforcement agents are making requests to these technology companies for content removal or for access to user data. So part of the question on the government demands section is that there is a real asymmetry as far as who is doing transparency reporting, and that means that kind of objective comparison of numbers from one side to the other isn't really possible. As for companies reporting on their own behavior, short of some sort of third-party auditing, it is hard to see where the reliance on those numbers can come from, because there is no way, looking at a single report, to check the numbers against each other.

There is no transparency in decision-making. There is no transparency in rules or in enforcement of those rules. It's not even like you can give anyone a tally of how many things you took down. More interesting is when they tell us they are changing the rule around bullying, for instance. I was consulting with Facebook and they wanted to create [indiscernible] a [indiscernible] a person who is under 13 years old who became virally popular, and they wanted to create rules around it. The project never happened and they never released the rule, even though I know it went to [indiscernible]. How often does that happen? How often does something pop up within their framework? How often does an exception get made on a particular kind of high-profile content, by a person or by someone who is connected to Facebook, that changes the decision but doesn't actually impact how the rule is interpreted going forward? It does speak to the point about fairness, [indiscernible] that type of approach.

I think the other thing that I have been wrestling with is the Oversight Board, which is supposed to be, in theory, a new independent third party that Facebook is standing up. We will see if that happens. The question that I have been struggling with is, how much transparency do you want when you are institution building? How much transparency is good? There are tradeoffs to transparency. There's a lot we don't know about how the Supreme Court works, yet we trust that it does. You have to start challenging the priors of why you trust some organizations or institutions and why we distrust other ones that are currently being created.

Along those lines, it seems like there is a challenge of scope: over one billion things are posted every day on Facebook alone. In 2018, when Mark Zuckerberg testified to Congress, his answer to many of the questions was, I am building an AI tool for that; there's an app for that. I want to ask the panelists, what are the benefits and risks of moving content moderation more into an automated, algorithm-based process? In particular, is this the right answer to dealing with fairness and consistency, or what are the risks?

I focus mostly on the risks. [laughter] I have been studying and then working in this field for many years, going all the way back to the 90s, with the advent of the commercial web and the questions around filtering: can Congress or the EU require hosts of content like AOL to use filters to keep, the concern then was, indecent content off of their services? There were a lot of legislative and court fights about these questions. There was a lot of debate about whether we should protect children from inappropriate material, but at what cost, with fights on both sides about requiring libraries and other institutions to filter in order to get funding.
The outcome, largely, was no mandatory filtering obligations, because filters were going to be overbroad and underinclusive, sweeping in material that wasn't the target of the filter, and that was going to be a significant burden on people's rights to freedom of expression. All of those assessments of the keyword filters of the late 1990s still apply to the A.I. of today. When we are thinking about more sophisticated filtering technologies, there are technical capabilities that exist now that were only a dream in the late 1990s, but the risk of under-inclusion and over-inclusion, of your filter not working well and having significant impacts, all of that is still there. We might be talking about it as A.I. or image recognition, and all of that is fascinating and there's a lot that can be done with those technologies, but those fundamental questions, how often is this tool getting a wrong result and what are the implications of that, we still need to answer. And there are no easy answers.

There are different kinds of automation that are used. Kate touched on one technique that is widely used: hashing. This is the underlying technology for PhotoDNA, for identifying child sexual abuse material; for Content ID, a YouTube tool for detecting allegedly copyright-infringing work; and also for the shared hash database to counter terrorism, which is an industry-led initiative focused on violent and extremist content. The technique involves taking a fingerprint of a digital file and scanning newly uploaded files against it. This is not a sophisticated analysis of the content in the way that looking for new instances of sexual abuse material or terrorist content would be. It is about saying, is this something that we, whoever is running this hashing tool, have identified as material we don't want on our servers? It can be sophisticated as far as how it analyzes images. Some of the earlier hashes were easier to circumvent: you could generate a fingerprint for an image, but if you resized the image it looked like it was brand new. One of the big innovations was a much more sophisticated kind of hashing, a technical process to make the fingerprint robust to different kinds of manipulations of the image that might occur, so that you can change the color or change it to black and white and the tools will still detect a match.

That kind of automation for identifying content is very useful in circumstances where there is no context in which you want to host that material. Child sexual abuse material is the perfect example. In countries around the world, there is no exception for journalistic uses; the possession or distribution of this material is illegal flat out. In that circumstance, "I have seen this before, I don't want it, take it down" can work relatively well, while minimizing the chances of false positives. It gets more complicated even when we go to copyright or terrorist propaganda. It might be exactly the same file that is being uploaded, but used in an academic or transformative way, where it might match the reference file but it is not appropriate for the content host to say, no, never, you can't post that material.

And I think an idea has taken hold in a lot of policy discussions around this technology: machine learning is very powerful, A.I. is powerful, and it will create an automated tool that can handle, hate speech is the big example, particularly in Europe. There is a focus on how much hate speech is online, and people are creative about how they do hate speech online. There are new kinds of hate speech; whether it's words, images or videos, people are creating different content.
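To make concrete why a perceptual fingerprint survives manipulations that would defeat an exact file hash, here is a toy "average hash" sketch. It is only an illustrative stand-in under stated assumptions; PhotoDNA and the shared industry hash database use far more sophisticated techniques, and the 8x8 grid, threshold, and function names here are placeholders for demonstration.

```python
# Toy perceptual hash: fingerprint an 8x8 grayscale grid by thresholding each
# pixel against the image's mean brightness, then compare fingerprints by
# Hamming distance instead of requiring an exact match.

def average_hash(pixels):
    """pixels: 8x8 list of grayscale values (0-255). Returns a 64-bit fingerprint."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p >= mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits between two fingerprints."""
    return bin(h1 ^ h2).count("1")

def is_match(h1, h2, threshold=5):
    """Treat near-identical fingerprints as the same underlying image."""
    return hamming_distance(h1, h2) <= threshold

# A slightly brightened copy shifts every pixel value but barely moves the
# fingerprint, so the match survives the manipulation.
original = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
brightened = [[min(255, p + 10) for p in row] for row in original]
print(is_match(average_hash(original), average_hash(brightened)))  # True
```

Because the fingerprint reflects the relative structure of the image rather than its exact bytes, a brightness shift or a conversion to black and white moves the hash only a few bits, so a near-match threshold still catches it; an exact-match hash of the raw file would miss it entirely.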
Then you get into an area like natural language processing, using machine learning to determine whether we can detect what makes this statement hate speech as distinguished from all of these other statements. There is some interesting work being done on that, but a lot of the tools, even today, that are developed with the aim of detecting hate speech or toxicity have around an 80% accuracy rate, which is amazing in the field of natural language processing; you can publish a paper on the back of that, and it is a solid result. But when you are putting it into practice on a platform that gets billions of posts uploaded every day and you have a 15 to 20% error rate, that is one out of five users who are very upset, who were told they were posting something that was hateful or toxic or shouldn't be on the platform. We could talk about the pros and cons of automation, but I will stop there. There is a lot, especially on the bigger platforms, but smaller platforms too, of trying to use automation to help identify the kind of content we probably should have a human look at: not automating the decision-making about whether this content needs to come down, but saying this should go into a queue for review because the tool thinks it is hateful or seems to fall into this bucket of harassing or toxic speech, and then having a human look at it and apply the judgment a human has about context and language choice and the relationship between people, which will actually determine whether something is harassment or just a tasteless joke between friends.

Going to that one-in-five error rate, the next question is, what are the remedies? For those who feel like something is wrongly taken down, what are their options? Since this is a private scheme of governance, what do we do?

I'm going to answer a different question. I'm a law professor. The paper I'm working on is a draft, and if anyone is interested, I would be happy to share it with you. When there is a rule violation, which could be based on statutes or on the private house rules that the company has adopted, what happens next? The standard model is, let's take it down, and the content comes down. We are always playing with those two options: leave it up or take it down. The paper I'm working on says there is a whole lot of other stuff that could happen. What do we do? What is the range of options that falls short of removal and still might address some or all of the problem? I developed about three dozen things that sites can do. The next part of the paper talks about why a site should choose among the different options, choose not to remove or choose something less, and how to prioritize along this toolkit. So the paper describes a variety of different levers that are in consideration. There isn't a single answer to that question, and that is one of the difficulties I struggled with: I can't say that in X circumstance this is a better option than removal, or that option A is better than option B, in all circumstances. The best I can do here is a list of variables, and sites can choose what might better fit their norms or their community. We see diversity among companies, and that diversity has a lot of potential; having more remedy options would give them the possibility to decide which kinds of things they prioritize or deprioritize based on their community and editorial standards. When you see a circumstance where the major internet companies are reaching different outcomes, that should signal something.
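Circling back to the automated-detection point Emma made a moment ago, here is a minimal sketch of the routing pattern she describes: a classifier with a meaningful error rate is used only to queue content for human review, not to remove it automatically. The toxicity_score function, threshold, and queue here are hypothetical placeholders, not any platform's real pipeline.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class ReviewQueue:
    items: List[str] = field(default_factory=list)

    def enqueue(self, post: str, score: float) -> None:
        # Store flagged posts so a human can weigh context, language choice,
        # and the relationship between the people involved.
        self.items.append(f"{score:.2f}: {post}")

def toxicity_score(post: str) -> float:
    """Placeholder for a trained model; returns a probability-like score."""
    flagged_terms = {"hateful", "slur"}          # illustrative only
    hits = sum(term in post.lower() for term in flagged_terms)
    return min(1.0, 0.2 + 0.4 * hits)

def moderate(post: str, queue: ReviewQueue, threshold: float = 0.5) -> str:
    score = toxicity_score(post)
    if score >= threshold:
        queue.enqueue(post, score)               # a human makes the final call
        return "queued_for_human_review"
    return "published"

if __name__ == "__main__":
    q = ReviewQueue()
    print(moderate("have a nice day", q))          # published
    print(moderate("that was a hateful slur", q))  # queued_for_human_review
```

The design choice matters at scale: with a model that is only roughly 80% accurate, automatic removal would wrongly penalize about one in five flagged users, whereas routing to a review queue keeps the final judgment about context and intent with a human.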
That is really interesting: there are multiple remedies, and different outcomes usually mean there is something else going on. Take political advertising: companies can reach different conclusions about what to do if there is a violation, and that makes us question what the options are, and whether we could get more out of it, not in a financial sense, from trying to play with those options and opening up the possibilities of doing different things.

From a user perspective, we talked about how some of this gets treated as market forces: the user experience, we don't want to see child porn and terrorist memes, et cetera. What about enforcement for the user itself? What does the average consumer get to do besides hopefully exert vague pressure on a company to do better? What are our options in this scheme?

It can depend on the platform whether you can appeal to the platform and ask them to restore your post or restore your account. I know for a couple of the platforms, it took years to even get the opportunity to have a post reinstated; the answer to an appeal was, well, if you think the content that you posted that we took down doesn't violate the rules, try posting it again and see what happens, see if we don't take it down next time. That was the function of an appeal. It is not a prior restraint or a total ban on you ever posting that content, but it is something, and a far cry from how we have thought about remedy in the First Amendment, legal context, where the idea that you would be silenced unconstitutionally is something you can challenge. Now, most of the social media platforms allow, for most of their policy violations, for people to say, please take another look at my post or reinstate my account. But if you don't succeed in your appeal, especially here in the U.S., that's pretty much the end of the line. You have the option of trying to create another account and hoping the service doesn't find you and decide, under their no-account-recreation policy, to take down the new account.

As far as legal remedies go, and we will get to this more fully, you can't actually take action against the platform to say, by law, you must reinstate me. We are seeing dozens of those kinds of lawsuits where people are suing services for adverse actions against users. They are failing, and for many different reasons. They are effectively trying to impose must-carry obligations on internet services. They are saying, you must carry content that at your editorial discretion you don't want to publish. If you accept the premise that the process is editorial judgment, that is the answer: you can't impose must-carry obligations on publishers.

The remedies available are almost patterned on judicial remedies. Often we are talking about the appeals process within a platform for appealing the determination that a post or account violated the terms in some way. But in some respects they are fundamentally different from those kinds of remedies, and I want to point out one way in which that is true. Does anyone have standing to appeal a determination to keep content up? The answer is plainly no. Nobody can challenge that. There are certain decisions that don't occur in this user frame that are difficult to talk about individual remedies for. I highlight that to say it is a systemic issue: we conceive of these processes as patterned on proceedings in which both sides are represented, but often when we talk about platform remedies, only one side is represented, usually the user, and in some cases it's not. Take the right to be forgotten.
When an individual requests that a search result be taken off of Google's search results page because it violates their right to be forgotten, the user who posted it or created the web page that allegedly violates the right isn't represented in the proceeding before Google through which that determination is made. So we think about these remedies as patterned after judicial remedies, but in a lot of ways they are fundamentally different, and I think it is useful to think about when and why that might be the case.

We have talked a lot about this recently. I would think about it in terms of standing, at least for the major platforms, not for search; I think that is different. In terms of individual remedies, there are two types of harm. There is the harm in which you post something and you think you have the right to post this thing against this private platform that is not the government, or maybe, as happens all the time in Europe especially, there is a defamation suit and then the government does make the platform take it down. Those are the ways you get remedied if something is taken down, something that was said about you [indiscernible]. The other way that you are harmed is if something has been said about you on the site and you petition to have it taken down. There is no set of appeals on the latter right now on most websites. If you want something taken down and they decide the content is going to stay up, you think it is antisemitic or some other graphic type of content and they say it is not, is it still harmful to you? Maybe they mute it or apply this penalty of shadow banning, all of these different levels that Eric was referencing. I think it's not a one-to-one relationship, and the role the internet plays in not only being able to speak freely but also in having complete access to knowledge, those kinds of delineations make the stakes different and make it hard to spell out individual remedies.

I think that's a really important point that Kate just made: what we are packing into maybe a single content moderation system on a platform, or the content moderation system for user content and then for ads, is a combination of interpersonal dispute resolution, company brand protection (do they want to be a platform that hosts this content?), and traditional media questions about what the editorial perspective of the platform is and whether it allows people to post things like hate speech or disinformation. A lot of different ways that we saw communications and media being shaped in the pre-internet world have collapsed together, and the rules that the companies create to address something like interpersonal dispute resolution may not make as much sense when we apply them to the bigger questions, or questions about what fairness is as far as ranking in your news feed or in search results.

With that, we discussed access to knowledge, and social media is a way to have access to more knowledge and has perhaps given voices to marginalized persons who do not have a platform. What is the interplay here, given that this is a private system of governance: are we promoting First Amendment free speech rights or hindering them? Is this for the greater good? Professor, would you lead us?

Yeah, social media platforms give voice to people who wouldn't otherwise reach an audience, and there is no effective way to run the major platforms that we have been talking about without some form of content moderation.
In that sense, moderation is obviously speech enhancing because it is a precondition to the existence of these platforms. But are the particular ways that platforms engage in moderation and enforce those rules speech enhancing? I think reasonable minds can differ about that. There is a fundamental problem in which an aspect of moderation that some find speech enhancing, other people consider speech restricting. Hate speech is the obvious example. It tends to silence some users, mostly women and people of color, and inhibits their ability to engage in speech on a platform of their choice because it targets them. So arguably silencing hate speech is speech enhancing. That is not a First Amendment argument; that is a failing exam, right? [laughter] But if you look to human rights law, it's defensible. In a First Amendment sense, I think there is a sort of predicate question of whether platforms should be articulating First Amendment-type rules in a global context. Facebook calls itself one global community. We would find it ironic, and more than ironic, to think that the First Amendment was being applied to communities at large. This question of whether inhibiting some speech is speech enhancing or speech promoting is a deep one that I'm not going to answer.

It's not something we have talked a lot about, but it's the relationship between government actors and platforms that really troubles me in this context. Increasingly, what we see are forms of soft, or getting harder, pressure exerted by government actors on platforms to modulate their rules in certain directions, or change the enforcement of those rules, or develop new technologies to scale the enforcement of their rules. What concerns me about that is it provides the opportunity for backdoor collaboration between law enforcement and platforms in ways that are not amenable to public scrutiny or oversight and are largely invisible to users and to communities who might be affected. I think that has a lot of collateral effects for surveillance, and, related to surveillance, a push by law enforcement to surveil increasing amounts of user speech. To me, that is the aspect of content moderation that has been most overlooked, because the primary frame in which we have approached it has been the sort of private governance model free from outside influence. Increasingly, that's not the case at all. We see governments all over the world trying to influence the way platforms make decisions, and I don't think we would be quite comfortable with the results.

This is not a hypothetical concern that Hannah is talking about, at all. In Europe, there are, both at the European Union level and in different EU member states, outfits called internet referral units. These are programs in which law enforcement look for content, much of which likely violates the law of the U.K., Germany or the Netherlands, but instead of taking that through a court proceeding, getting information about who posted it and bringing charges, they send a referral to one of the social media companies to say, we have identified this content and think it violates your terms of service. Rather than law enforcement operating within their authority under the law, they instead see the much greater freedom and speed with which tech companies can respond from the content moderation perspective, and they flag things to companies for them to consider under their terms of service and "voluntarily" decide to remove.
I put "voluntarily" in quotes because that kind of pressure can potentially have all sorts of coercive effects, and Europe is in the middle of debating terrorist content legislation which, in some drafts, considered putting into law that it was mandatory that you respond to the referral unit and explain why you didn't take something down, with the heavy presumption that if you get a notification from law enforcement about content, it ought to come down. That is a concrete example of the blurring of the line between what is against the law and what is a violation of a privately developed terms of service. When you have law enforcement using terms of service as the grounds on which they go after people's speech online, we are less in a position to hold either governments or companies accountable, which is the worst case scenario for all of us.

It has been fascinating and heartbreaking to watch. A lot of these companies, the ones headquartered in the U.S., take a First Amendment-forward perspective, saying we have the freedom to set the terms the way we want to. Their terms look more like European approaches to regulating speech, recognizing the chilling effect of hate speech and putting it into their rules. But having that great leeway used by governments outside the U.S. to say, your policies against hate speech are a lot broader than our national law and that means more can come down, I think is a really disturbing trend.

I'm going to talk about Jack Balkin's triangle, and I think it is particularly useful here. He talks about the idea that in the old-school model, the primary concern was the state; that is the reason we have the First Amendment. You were worried about the boot of the state on the neck of the citizen speaker. The internet changed that and turned it into a triangle. People no longer had to worry about being censored, and they didn't have to have their speech amplified by a publisher; they could run around this problem. And that still helps. But two things have been happening that get short shrift and explain why we are caught up in the techlash. One, exactly as Hannah described, is that the triangle is collapsing between the side of government and the tech companies; they are coming closer together. The other thing that is happening is that, just as individual citizens and end users ran around the problem of censorship by using platforms, you basically see these states starting to run around the problem of democracy by using tech companies. They are trying to enforce the laws of their nation state, with court orders that apply to everyone, through the tech company.

There were just a bunch of decisions, and I think Emma mentioned this, that came out of the E.C.J., including the defamation decision upheld by the E.C.J. involving a politician [indiscernible] in Austria, who got the courts to agree, and the court said, Facebook, you don't just need to take it down in Austria; it was a suggestion, like, we reserve the right to make you take this down everywhere. That is a huge move. I don't have democratic accountability to the EU; I'm not a citizen. They are co-opting that through a site like Facebook. It is not dissimilar from what is happening to the access-to-knowledge idea with the right to be forgotten.

Talking about holding companies accountable, between government and platform and platform and user, we alluded to this already when we discussed platform immunity: there is very broad immunity for platforms for third-party content. What is the role of Section 230 in the landscape of content moderation?
Professor, if you want to walk us through what Section 230 means for this topic.

The rules about content moderation vary widely across the globe. To isolate the U.S. and talk only about U.S. law is incomplete because of the collapsing of borders and the extraterritorial reach of foreign law, but let's talk about the U.S. because it does dominate the way major user-generated content services work; they still adhere to U.S. norms and laws. There are two levels. One is the constitutional protection, the First Amendment, which is so deeply encoded in the way the companies are built that we take it for granted. It is how they start their thought process, throughout the world. A lot of the objections to content online are objections to content protected by the First Amendment. Many types of harassing or disparaging content may still be legal, and governments are perplexed: why is this still here? It is because it might be legal content. The big frontier we will wrestle with is, can governments force internet companies to remove or otherwise address what we will call lawful but awful, sorry, awful but lawful content? The governments are saying you shouldn't have that awful content; in the U.S., it is protected by the First Amendment. Much of the terrorist content is protected. Content moderation is a bypass to the protections we factor into First Amendment-protected content: you can get it removed through private cooperation.

When we talk about the legal framework for internet companies, so much rests on Section 230, the law passed in 1996 that says sites aren't liable for third-party content. There are exceptions: there are different rules when it comes to copyright infringement, which sits on a notice-and-takedown scheme, and federal criminal prosecutions are not covered by Section 230; those have different rules. Otherwise, when we are not at those outer boundaries, Section 230 says they are not liable for third-party content. In the mind of many people, especially those rooted in standard defamation and privacy law, this is counterintuitive. It means takedown notices about content covered by Section 230 can be ignored freely; that doesn't change anything. Websites still wouldn't be liable for the third-party content. This sets the baseline for why we see so much awful content online. Websites have the freedom to publish even if they are uncertain whether it is a legal violation or whether it fits their editorial voice; they are allowed to publish it without liability.

We can talk about that; I'm not sure we will go down that path, but let me point out a couple of things Section 230 does that we take for granted. The first is that, for all the complaints about internet companies removing content that they shouldn't remove, without Section 230 that would look far worse. A lot of content that is valuable in our information ecosystem is content that Section 230 helps continue to persist. Marginalized voices can often get access to internet publication in situations where they would have no access to other publication tools. The other benefit I mentioned is experimentation. It creates standardization problems if we are trying to do transparency reports, but different editorial voices in our system might produce things that allow different communities to better actualize their potential. Not all sites look alike, and that is a feature, not a bug. Section 230 says some will be more restrictive and some less; that is fair game under the Section 230 rubric.

So there have been proposals to limit the broad immunity. I would welcome comments from the panel on whether Section 230 as it stands is the best approach, or whether there is an alternative.

Section 230 is doomed.
It will not survive in its current framework. The only question is how it will be doomed. Will it change the basic architecture? Will we look more like the European model, where they don't call it censorship but there is lots of censorship in Europe? We may end up with that. We have the First Amendment baseline, but that doesn't allow the same extent of activity that Section 230 does. I have a paper on that. I don't want to belabor the point, but Section 230 is doomed. The question is, what are we going to do about it?

I think it is a difficult question how much of content moderation is about either reducing the amplification of or removing speech that is protected under the First Amendment. Think of pornography: most if not all of it is available in some places online but not on every website. If the legal system changed so that platforms had a duty to take down all illegal content but couldn't be required by law to take down lawful content, one potential implication is that you would see more pornography in every newspaper comment section and on every social media platform. That is obviously not the direction Congress wants to head in, nor where most people want the legal system around online speech to head. It gets us to the difficult question of whether and how government can incentivize or encourage removal or restriction or limitation of speech that people have a constitutional right to say. This comes up in a number of different proposals around reasonable efforts or good-faith efforts to do content moderation. Maybe that should be the hook for Section 230: that website operators only get protection if they demonstrate they have been reasonable in how they do moderation. That is a fair way to think about it, but scratch the surface: what do we mean by reasonable content moderation? My students would balk at hearing the word reasonable used. The major social media companies, Facebook, YouTube, Twitter, have enormous expertise as far as what mass-scale content moderation looks like. There are also some academic experts out there, a variety of people who worked in the field and shifted into other roles, but it is hard for me to see how legislators are going to come up with a more reasonable way to do content moderation than a lot of what we have seen from the major social media companies. That doesn't mean we have to like or agree with or think the companies are prioritizing things as we would, and those are important conversations, but as far as negligence goes, they are taking down blatantly illegal content, as most sites are. We need to have these conversations, whether it is about what Section 230 looks like or what you want out of content moderation: what is not happening that should happen, and is that achievable in a mass-scale content hosting environment? That last piece is the hardest to do. A lot of these questions end up involving very fine-grained evaluation of different kinds of content, or a tricky weighing of competing incentives where reasonable minds are going to disagree.

In that vein, private companies are getting it right in a lot of ways, but one glaring challenge is political interference and the idea of misinformation, platform policies and political ads. Do you think platforms can adequately address on their own, without any sort of government intervention, this challenge of political interference, misinformation, and political ads that might be misleading?
We are seeing right now a set of parallel experiments by the major, what we think of as the major, social media companies, each coming to a different conclusion about how they will grapple with political ads in the 2020 election season. We have been in it for a year and a half, but it is 2020 now and there will be a lot of scrutiny on how these companies' newly articulated policies are going to play out. I want to back up to talk about why the question of what to do about political advertising and disinformation, and their intersection, is tricky. It comes back to issues of definition. One of the big challenges around political ads and disinformation is defining what an ad is. We know advertisements: vote for so-and-so, buy this toothpaste. There are obvious ads, but there are also people who want to get a message out, and they pay to promote that content in whatever ranking or recommendation system the platform has in place. You see people trying to sell you things on Facebook, and you see news organizations that want you to read this important reporting they have done, so they pay to promote it into your feed, because although you follow them and their publication on the platform, it may not naturally show up at the top of your feed and you may miss it. There are lots of different ways money comes into getting content in front of people that are different from how we typically think of advertising: statements proposing a commercial transaction.

Harder than that is defining political. What is the political part of a political ad? There is the easy category of electioneering or campaign ads that identify a specific candidate or office or race or referendum. It can be challenging to enumerate that for the whole country at the federal, state and local level, but it is a discrete, knowable field. You could theoretically list every race and every candidate, and anything about those people is political. That is a challenge, but possibly doable. Then there is the question of issue ads. How do you define what is an issue ad? One federal definition is a political matter of national importance, including candidates for office, elections for federal office and any national legislative issue of public importance, which is a lot. It is health, it is guns, it is education, it is environment, it is many different things that are being talked about in campaign-specific circumstances and broader political circumstances and in people's everyday lives and the opinions and statements they want to make. When you talk about trying to identify political ads in order to put regulation around them, whether it is a law or a company policy, we have to recognize that it is a hard category to define. This brings us to the questions about consistency and fairness of application. If you can't get your hands around what this content is, figuring out a good rule to apply to it can be challenging.

A couple of examples of how this has been difficult as different companies rolled out their ads policies: Facebook was the first to try to tackle political ads. They ran with a definition modeled after that federal definition of legislative importance and ran into things like a bookstore promoting a talk by Cecile Richards, the longtime president of Planned Parenthood, and the bookstore was told it was a political advertiser. An immigrants' rights organization tried to promote a post about pro bono lawyers for DACA recipients and was told the people at the organization needed to provide
U.S.-issued ID to run the political ad, because that was part of Facebook's verification system, even though the ad wasn't actually about anything political or campaign related.

Going into the 2020 election, we see three strategies playing out. Twitter is prohibiting the promotion of political content: politicians and lobbying organizations can't run ads. They have tried to take a broad definition of what political content is, past, current or proposed ballot measures, bills, legislation; if it has been a matter of political question, it counts and you can't do it. They had to build in an exemption for news publishers, because news publishers often have content about political issues. The requirement for news publishers is to identify themselves to get into this special exemption, and their promoted content cannot include advocacy for or against the topics they are covering. It is an interesting effort: what does it look like to say, we don't want to be a platform for political advertisement?

Google, through its ads in search and display ads, is allowing political ads but is trying to limit how they can be targeted. Previously, they allowed targeting based on public voter records and general political affiliations that I think Google had inferred about users; now it will limit that and say you can only target election ads by age, gender, and general location at the postal code level. They are trying to answer the question of disinformation with, let's take away the tool of microtargeting, which, they emphasize in their policies, they never really offered anyway, but they are walking it back to say you can only target by a couple of key categories if your ad falls into this political area.

For Facebook, there has been scrutiny over their decision, or their announcement, of not fact-checking political ads, but they have the most permissive approach: they will allow ads about social issues, elections and politics, but you have to go through a verification process. They have started breaking it down into different articulations of what a social issue is for different countries and regions. In Canada, it is a list of issues including civil and social rights, economy, environment, immigration, political values and governance, security and foreign policy; for the U.S. they add crime, education and guns, social issues we have that apparently Canada doesn't.

It will be interesting and important for companies to be doing evaluations of what impacts these rules are having. We sort of have an experiment running in parallel right now, with a ton of confounding factors. Some of the questions folks have raised about what these policies mean are: what does it mean for incumbent candidates versus up-and-comers, people who don't have a ton of resources, aren't celebrities, don't have a ton of money, things that matter in getting your voice out? Do these policies tilt the playing field for incumbents versus new entrants? How does the broad definition of issue ads capture issue-related advocacy that is not necessarily related to an election or a campaign? What does this mean for people trying to get the word out about the climate crisis? Does that become political? It is political, but should it have to face the same rules electioneering ads do? And the big question: will this have an impact on the spread of election-related disinformation?
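As a small illustration of the kind of targeting restriction just described for Google's election ads, here is a hypothetical validation sketch. The allowed category names, the function, and the rule shape are assumptions for demonstration, not Google's actual ads API or policy enforcement code.

```python
# Hypothetical rule: ads classified as election-related may only use a few
# coarse targeting categories; anything finer-grained is rejected.
ALLOWED_ELECTION_AD_TARGETING = {"age", "gender", "postal_code"}

def validate_targeting(is_election_ad: bool, targeting_keys: set) -> bool:
    """Reject election ads that target beyond the allowed coarse categories."""
    if not is_election_ad:
        return True
    return targeting_keys.issubset(ALLOWED_ELECTION_AD_TARGETING)

print(validate_targeting(True, {"age", "postal_code"}))           # True
print(validate_targeting(True, {"voter_record", "postal_code"}))  # False: microtargeting
```

The same shape of check could carry a different allowed set or a different definition of "political" per platform, which is part of why the parallel 2020 experiments described above diverge so much in practice.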
It will have an impact on who can post what kinds of ads on certain platforms, but what we have seen in the studies of the disinformation campaigns in 2016 and everything since then is that people out there trying to sow chaos use whatever tools they have available, so there is an arms race to all of this. Seeing what impact the rules have on specific targeted efforts to manipulate an electorate or spread false information about how to vote definitely needs to be studied.

It seems like we are in a space where the companies are trying, the platforms are trying. I would love to close with a question. I will begin with the professor, but I will ask for everyone's input. With your work with the Facebook Oversight Board, do you think this is the way forward? What is Facebook doing, and do you think it is enough? Then I will ask the other panelists.

That was such a thorough and comprehensive description of the political ads aspect. My mom at Thanksgiving was like, why can't Facebook take down political ads? Everyone finished dinner and I was still sitting there with a full plate of food. For me, the big takeaway from this whole thing is that I'm actually more concerned about why it is that these companies are unilaterally deciding what these policies are to begin with, and we have no say in it. It is unequivocal that it is changing the political landscape and our democracy. It doesn't make sense why these companies are not taking into account users' input. I think the concern about this has been, we can either regulate them and force them to take that input into account, break them up, or create enough pressure to do that; or, and I think this is getting traction, democratize the platforms themselves in some way, or combine these approaches and regulate them to force them to democratize. Right now Facebook is voluntarily using self-regulation, perhaps in a way to stave off potential regulation. They are trying to democratize the platform and content moderation.

We talked about how a piece of content comes down. You can appeal something on Facebook, but once it gets to the appeals level, they will either keep it up or take it down, and that is all you have to say about it. There is no real direct input, at least not in a particularly fair or just or transparent way. Traditionally, in setting policies about what kind of content is hosted, there have been a number of civil society groups that work with these companies, and they are great groups, that try to work with them to keep up good content and take down bad content. But there is a lot of cronyism still; there is no other way to put it. There is a lot of stuff that still comes up, that still gets more attention because you are an influencer on Instagram than if you are Joe Schmoe trying to get news out about a protest in Manhattan. These kinds of differences are a big deal.

To answer the question: about a year ago, Mark Zuckerberg, in response to pressure that had been building for years, announced what he has coined the Oversight Board, a board that is completely independent, structurally, financially, and, they like to say, intellectually, because hopefully the board will be endowed with money to be dispensed. We didn't know if it would be real money; we thought 30 million, and now it is 130 million. To create the board, and the board is imagined to be around 40 people, starting operations with 15 members and three cochairs, they took a team of about 12 people called the governance team, and I have been embedded with that team watching it happen and documenting it.
For the first six months of 2019, this was a massive global consultation. They held workshops where experts in speech and human rights and national security and government and civil society tried to figure out what this board should look like from a constitutional perspective. Then they kind of started to make hard choices after they heard from all those people. They published a report, which turned into two main documents, a constitution and a set of bylaws to run the board, complete with amendment clauses and everything you could imagine.

That process produced a robust system. It involves questions of subject matter jurisdiction, how decisions will be made, whether or not the decisions of the board will be binding, whether there will be a transparent decision-making process, whether past decisions will be anonymous, how they determine which cases they get to choose, how members are selected, how new members will be selected, how members will be removed if there is a problem with the code of conduct or any type of problem with behavior, and the board's ability to change its scope and hear more cases. The scope of what is binding on Facebook right now is limited: the only binding thing is that if you appeal a decision, that piece of content, as it pertains to you, can be changed by the board and Facebook will have to listen. If I have a picture of my puppy and it gets taken down, and I appeal to the board and they reinstate it, Facebook has to put it back up, but that is it. That is the only thing Facebook has to do. The board can also issue a policy directive, which is like a public policy recommendation, and Facebook has to publicly respond to it, saying whether they have decided to take the recommendation into account or not. That is how the board is going to work. The last piece of it is coming at the end of the month: the bylaws and the code of conduct are coming out, and the board members will be announced in mid-February.

Whether it works or not, or ends up being good or not, it has been an incredibly granular and massive institution-building exercise. Also, and this is to Eric's point about diversity, it is unclear how it will turn out. It seems like this is great; in a lot of ways I think maybe this is the chink in the armor we need to take back the internet in a democratic, accountable, transparent way, and maybe this will work and we will leverage it. But sometimes I get scared that all the platforms are simply going to plug into this model, this particular group, and all of a sudden we will have an enormous third-party censor, and that is scary. And we are going to lose the diversity of platforms that sometimes provides a really robust place for free speech online. Those are the tradeoffs. There are more than that, but those are some of them. That is what I have been following and watching happen over the last couple of months.

In closing, we will allow time for questions, but I would like to hear from the other panelists. Do you think this is the way forward? Is this the way to go, or do we need something else?

Spoiler: I can't predict the future.
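[Editor's note: as a rough illustration of the narrow binding scope just described, here is a minimal hypothetical sketch. The names (BoardDecision, apply_decision, restore_content, publish_response) are invented; this is not Facebook's or the board's actual system, only a model of the distinction between a binding content ruling and a non-binding policy recommendation that requires only a public response.]

```python
# Hypothetical sketch of the binding/non-binding split the panelist describes.
from dataclasses import dataclass
from typing import Optional

@dataclass
class BoardDecision:
    kind: str                       # "content_ruling" or "policy_recommendation"
    content_id: Optional[str] = None
    reinstate: bool = False
    recommendation: str = ""

def apply_decision(decision, restore_content, publish_response):
    """Carry out only what is binding; everything else gets a public reply."""
    if decision.kind == "content_ruling":
        # Binding: the specific appealed piece of content must be restored (or kept down).
        restore_content(decision.content_id, decision.reinstate)
    elif decision.kind == "policy_recommendation":
        # Not binding: the platform only owes a public response, accepting or declining.
        publish_response(f"Response to recommendation: {decision.recommendation}")

# The puppy-photo example from the discussion: a reinstated post must go back up.
apply_decision(
    BoardDecision(kind="content_ruling", content_id="post123", reinstate=True),
    restore_content=lambda cid, up: print(f"restore {cid}: {up}"),
    publish_response=print,
)
```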
But I think we are in this moment, it almost feels like it is the 1970s and the Church Committee has just made its recommendations, or Congress has just enacted the Foreign Intelligence Surveillance Act, and everyone says, great, we have this new court that will oversee this regime of surveillance, it will make them accountable, it will be a democratic process, and we decided to do this great thing. Now it is 2019, and I think most of us would not say that it is a beacon of transparency and accountability. That is a bipartisan statement. I think we really cannot tell whether the board is going to be a Supreme Court or a FISA court. I think we cannot tell whether it is going to serve to legitimate Facebook's bottom line and internal motivations or whether it is going to be a real check. I think there is very little happening right now that will tell us one way or another; it is a longer-term question to assess whether it comes out one way or the other. There are some ways in which the process and the participation and the framework have been great, but I am not confident that it will deliver on its promises.

The real risk is that the Oversight Board will take the wind out of the sails of alternative proposals that might be better, and that it will incentivize other platforms, mostly Google, to create their own internal oversight boards. That might undermine efforts to create multistakeholder mechanisms that might be better engineered to serve users and governments and platforms in a more representative way.

Facebook takes a remarkable position: it tries to have a single Facebook law for the entire globe. That is the fundamental crux of why so many governments hate Facebook. It tells them, I am not going to do your law; I am doing my law. The government says, no, you are going to do my law. The Oversight Board is not going to fix that problem. It is reinforcement that Facebook thinks it is marching to its own drummer. Either Facebook is likely to break the internet, or, and I am much more concerned about this, because every government wants to tell Facebook, you will abide by our law, not your law, we will see a demise of user-generated content. I think it will look like Netflix. Sometimes companies will decide to go to court over the laws in the countries where they operate, or they respond to flags, but they do not do anything to localize for the local laws; they are just waiting for the problem to come up and then they respond accordingly. We see things like NetzDG. Companies make a lot of choices about where they subject themselves to jurisdiction. Do they put equipment in a country? If they have concerns about the laws of the government of Pakistan and do not want to be subject to those laws, do they run into issues of being blocked in that country because the government is in tension with them: we want our laws to apply on your platform and you are telling us you will not do that.

Please join me in thanking our panelists today. [applause] We have about 10 minutes remaining. I welcome any questions. We have a microphone, and we ask that you introduce yourself. Go ahead.

Thank you very much. This is very interesting from a European perspective. Some things impressed me more than others, such as when it was said that there is private government of the platforms; we don't have this in Europe, and it has changed everything. My question is whether that doctrine is performing [indiscernible]. The second thing is hate speech, silencing hate speech. I was an expert on fake news.
Not just saying that the public debate [indiscernible]; there is digital distrust. Thank you.

We tend to have a more optimistic outlook here. What about a different perspective, where you have all of our content in the hands of private actors?

Someone once described to me the difference between Americans and Europeans in how they think about government: Americans do not trust the government and think the markets will solve everything. That is a gross oversimplification, but I think of those as the fundamental reactions that happen when we are talking about tech, and they kind of play out in that way. I am headed to Europe tomorrow to present on the board and talk about this stuff specifically. There are a ton of priors that end up built in as an American, and I try to think about things in a different way, including in the stuff I am writing and the way we have been thinking about it. One example is the Oversight Board. I was told that this is a remedy just for Americans; they said, we can go to court and get things taken down. I said, with the board you can get things put back up, but you cannot get them taken down. That is how they were thinking about it, because they already have a legal system that can do this. It had not even occurred to me, frankly, when Facebook was creating the board. Think about the judgment of the European Union that was mentioned before; it was obviously a reaction to abuse of power, and this is now in the hands of a European court. That is my understanding.

Anybody who is interested in these topics should be paying attention to what is going on in Europe. We have seen the Copyright Directive adopted, with what amounts to a mandatory filtering obligation, and there is a conversation about different requirements on companies around terrorist propaganda. The European Union will be taking a look at their entire framework; although it is structured a little bit differently, it will be reopened and debated for the next two to three years, with new legislation and a new legal framework as the goal. The European Union has proven much more likely to pass legislation than the U.S. Congress, so I think a lot of the big questions are going to be debated and likely decided in Europe. Whether U.S. and First Amendment ideals have a part to play in that is a big question. I would also add the U.K. online harms proposal, which targets unlawful content and would force its removal.

We have six minutes left if there is one more audience question; otherwise, we will conclude with our business meeting. I see nobody jumping to the microphone. With that, let's thank our panelists one more time for that excellent commentary. Thank you, everyone, for attending. Thank you, everybody, so much. Those interested in the business meeting, please come forward.

In campaign 2020 news today, Julián Castro announced that he is suspending his campaign for the White House. Here is the video he released of his announcement on Twitter.

I am a candidate for president of the United States of America. I am very proud that I can say that. It shows the progress we have made in this country. My grandmother came here when she was seven years old as an immigrant from Mexico. Two generations later, one of her grandsons is serving in the United States Congress and the other is running for president of the United States. [indiscernible] Watching that image is heartbreaking. It should piss us all off. My plan includes getting rid of a section of the immigration act, to go back to the way we used to treat it when somebody comes across the border, not to criminalize desperation.
This president is caging kids on the border and letting ISIS prisoners run free. After he murdered nine people worshiping at Bible study [indiscernible]. What about Michael Brown? What about Eric Garner? What about Stephon Clark? What about all of those young men and women whose lives were lost because of police violence? They deserve justice, too. No matter who you are, no matter what you look like, no matter the color of your skin, you ought to be treated the same under the justice system. I am the only candidate with a plan on police reform. I have not been afraid to stand up and speak the truth, and to speak up for people who are forgotten. If we all work together, we can build a nation more prosperous not only for those who are already doing well, but for everybody else. It is time for the Democratic Party to change the way we do our presidential nominating process.

I am proud of the campaign we have run together. We have shaped the conversation on so many important issues, stood up for the most vulnerable, and given a voice to those forgotten. Given the circumstances of this campaign season, I have determined that it simply is not our time. So today it is with a heavy heart and with profound gratitude that I will suspend my campaign for president. I am thankful to all of our supporters, to those who knocked on doors, made phone calls, donated, or told their friends and family about our vision to put people first. I am not done fighting. I will keep working toward a nation where everyone counts: a nation where everyone can get a job, good health care, and a decent place to live, so our children can walk across the graduation stage and kids can walk in their neighborhood in peace. For all who have been inspired by our campaign, especially our young people: keep reaching for your dreams, and keep fighting for what you believe in.

Campaign 2020: watch our coverage of the candidates on the campaign trail and make up your own mind. As the voting begins, watch our live coverage of the caucuses on Monday, February 3. C-SPAN's Campaign 2020, your unfiltered view of politics. 2020 presidential candidate Elizabeth Warren spoke in Boston on the first anniversary of the formation of her run for president.

