
In recent years, domestic terrorism, and specifically white supremacist, conspiracy-related and anti-government violence, has become one of our nation's greatest homeland security threats. Last October, the committee held a hearing to examine the role that social media platforms play in the amplification of domestic extremist content, and how that content can translate into real-world violence. We heard from expert witnesses who discussed how recommendation algorithms, targeting and other amplification tools push extreme content to users, because that type of content is what keeps people active on the platforms. Unfortunately, because these platforms are designed to push the most engaging posts to more users, they end up amplifying extremist, dangerous and radicalizing content. This includes QAnon, Stop the Steal and other conspiracy theories, as well as white supremacist and antisemitic rhetoric. In some cases, the content may not necessarily violate a company's community guidelines; in other cases, even content that is in clear violation of company policy remains on the platforms and is often only removed after public pressure. In both cases, it does significant harm to our society and stokes real-world violence. We have seen this happen time and time again, from the 2017 neo-Nazi Unite the Right rally in Charlottesville that was organized using a Facebook page, to the violent attack on the U.S. Capitol fueled by Stop the Steal content that repeatedly surfaced online, to the shooter who livestreamed as he massacred shoppers at a Buffalo supermarket. There is a clear connection between online content and offline violence. We have heard many explanations from social media companies about their content moderation policies, their efforts to boost trust and safety, and the actions they have taken to remove harmful accounts. There is no question that those efforts are important, but there is a question of whether those actions are enough to effectively address the spread of dangerous content online and the resulting threats it poses to our homeland security. The central question is not what content the platforms can take down once it is posted, but how they design products in ways that fuel that content in the first place, and whether they build those products with safety in mind to effectively address how harmful content spreads. That is the focus of today's hearings. We will have the opportunity to hear from two panels of witnesses: outside experts, including former Facebook and Twitter executives, as well as current senior executives from Meta, YouTube, TikTok and Twitter who are charged with designing social media products used by billions of people all across the world. The overwhelming majority of social media users have very little information about why they see certain recommended content, and there is very limited transparency into how social media companies balance their business decisions with the need for online safety, including what resources they invest in limiting the spread of harmful content. Our goal is to understand how companies' business models and incentive structures, including revenue generation and employee compensation, determine how social media products are built,
and the extent to which current incentives contribute to the amplification of content that threatens homeland security. For nearly a year, we have been pressing Meta, TikTok and YouTube for more information on their policies to monitor harmful content, as well as the relationship between their recommendation algorithms and targeted advertising tools, which generate much of the companies' revenues, and the amplification of extremist content. The companies' responses to those inquiries have been incomplete and insufficient so far. This morning, we will hear from two former executives and a technology journalist with social media expertise about the internal product development process and the business decisions these companies make, including the tradeoffs between revenue and growth and their trust and safety efforts, as well as how they interact with foreign governments. Later this afternoon, we will hear directly from the chief product officers of Meta, YouTube and Twitter and the chief operating officer of TikTok, the executives charged with making those business decisions and driving the strategic vision of their companies. I look forward to a productive discussion with both panels. Welcome to this committee; we look forward to your testimony. Ranking Member Portman, you are recognized for your opening comments.

Sen. Portman: I would like to thank the experts for being here. It will be an interesting hearing. This past Sunday, we observed the 21st anniversary of the tragic 9/11 terrorist attacks. Whistleblower testimony has revealed on numerous occasions that social media companies are aware that certain features create threats to users and can conflict with user safety. It is unfortunate that the American public must wait for whistleblower exposures to find out about the ways in which platforms are knowingly and unknowingly harming their users. The lack of transparency around the development process, algorithms and statistics creates an asymmetric environment in which the platforms know all, yet users, policymakers and the public know very little. One consequence relates to China. I have concerns that the Chinese Communist Party has access to TikTok data on American users: over 100 million Americans, many of them under the age of 19, use TikTok. TikTok data remains vulnerable to the Communist Party of China as the CCP tries to exploit its access to U.S. data and exert influence over what users see. Despite moving data servers to the U.S., TikTok and ByteDance employees retain the ability to access this data. If that is not true, we would like to hear about it today. We also learned yesterday, from Senator Grassley's opening statement at the hearing with the Twitter whistleblower, that Twitter failed to prevent Americans' data from being accessed by foreign governments. The whistleblower spoke about how several employees worked for agents of India, China and Saudi Arabia, which is concerning and speaks to why we in Congress need more information from platforms about how they secure user data. Another consequence of poor transparency relates to content moderation. I recognize that content moderation is a key component of creating safe platforms for users, but it cannot be the only thing. Transparency reports released by companies often detail the amount of content removed for violating company policy. Those reports do not account for violating content that is left up on the platform and goes undetected.
It also doesn't account for content that is incorrectly censored, as we have seen with many conservative voices on social media. I, like many of my colleagues, have been critical of the political biases held by big tech platforms, which have resulted in systematic takedowns of accounts that hold ideologies with which the left and the liberal media disagree. We will hear about that today. These takedowns occur under the guise of combating misinformation, when in fact they are really just combating conservative viewpoints that conflict with their own. Any steps taken to address the impact of social media on homeland security must safeguard free speech. For us to have a responsible conversation about the effects of harmful content on American users, we must talk about how current transparency efforts have and have not worked. Congress must enact legislation that will require companies to share data, so that research can be done to evaluate how harms from social media impact Americans. I have been working on bipartisan legislation along those lines with Senator Coons. The Platform Accountability and Transparency Act would allow the largest platforms to share data with vetted, independent researchers, so we can all increase our understanding of the inner workings of social media companies and regulate the industry based on good information, information we simply do not have now but can learn through this process. I want to thank the witnesses for being here. I look forward to having your expertise help guide us on these complicated issues, and thank you, Mr. Chairman, for holding this hearing.

Sen. Peters: Thank you. It is the practice of this committee to swear in witnesses, so if each of you would please stand and raise your right hand. Do you swear that the testimony you will give before this committee will be the truth, the whole truth and nothing but the truth, so help you God? You may be seated. Our first witness is Alex Roetter, former senior vice president of engineering at Twitter. He helped grow Twitter's monthly active users to over 300 million and helped build the network from zero revenue to $2.5 billion a year. He also spent six years at Google on a variety of projects, including building the world's largest computational platform. He was in the room for major decisions about products at Twitter and is familiar with the priorities that were weighed as products were created, as well as with how those products were built. Welcome to the committee; you may proceed with your opening remarks.

Mr. Roetter: Good morning, Mr. Chairman, members of the committee. Thank you for inviting me here today. We live in a world where an unprecedented number of people consume information from social networks. Viral content and misinformation can propagate on these platforms at a scale unseen in human history. Regulators must understand companies' incentives, culture and processes to appreciate how unlikely voluntary reform is. In over 20 years of working in Silicon Valley as an engineer and an executive, I have seen firsthand how several of these companies work. Today I will talk about how these companies operate and about actionable ways to demand transparency. The product development lifecycle works as follows. First, teams of product managers, engineers and designers are assigned specific metrics to maximize. These metrics carefully track user engagement and growth, as well as revenue and financial indicators. Other metrics, such as user safety, are either not present or much less important.
Second, teams use an experimentation system to launch changes to small percentages of users. The effect of every experiment on key metrics is measured extremely accurately. Absent are detailed metrics tracking impacts on user safety. For example, I never once saw a measurement such as: did a given experiment increase or decrease the spread of content later identified as hate speech? Third, executives review these experiment dashboards regularly and make decisions about which experiments to launch. These reviews are run by product and engineering; legal and trust and safety are absent or do not play a substantial role.
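To make the launch-review process Mr. Roetter describes concrete, here is a minimal sketch of what such an experiment readout and ship decision might look like. It is illustrative only, not any company's actual code; all metric names and thresholds are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class ExperimentReadout:
    """Per-experiment metric deltas versus control, reviewed at launch time.

    All names are hypothetical. Note what is tracked precisely
    (engagement, revenue) and what is simply absent (safety).
    """
    name: str
    dau_delta_pct: float        # change in daily active users
    sessions_delta_pct: float   # change in sessions per user
    revenue_delta_pct: float    # change in ad revenue
    # No field like hate_speech_spread_delta_pct: safety impact
    # is not measured, so it cannot influence the decision.

def should_ship(readout: ExperimentReadout) -> bool:
    # The decision keys only on the metrics that exist in the readout.
    engagement_win = readout.dau_delta_pct > 0 or readout.sessions_delta_pct > 0
    revenue_ok = readout.revenue_delta_pct > -0.1  # small revenue loss tolerated
    return engagement_win and revenue_ok

# A change that increases engagement ships, regardless of any
# unmeasured effect on the spread of harmful content.
print(should_ship(ExperimentReadout("new_ranking_model", 0.8, 1.2, 0.3)))  # True
```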
Culturally, these companies have informal hierarchies in which the builders, the engineers and product managers, are held in the highest regard, while other functions are viewed much more skeptically. The strong bias is to make sure that corporate bureaucracy does not slow down product development. These companies conduct regular performance evaluations and promotions, and these drive peer recognition, career advancement, and cash and stock awards. The main data collected is what impact an individual's work has on key metric families. Only a minority of builders get promoted based on impact to trust and safety metrics, as those are not valued highly. What data has been shared to date is mostly non-illuminating statistics designed to create the appearance that these companies are taking the problem seriously. When one of the largest companies in the world says it is spending what seems like a large absolute number, that number must be put in context and compared to the size of other initiatives, for example product efforts, or how much the company spends on stock buybacks. Large investment amounts are not sufficient; we must demand transparency based on measuring actual results. Similarly, when a company points to how much content it has taken down, that has to be understood in terms of its reach on the network. For real transparency, I recommend assembling a group of independent researchers and data scientists, tasking them with enumerating the right questions to ask and the set of data they need to answer them, and funding them to do this work continually and to refine their questions and data requests. The government is able to demand transparency in technically demanding fields: third-party auditors of public company financial statements are able to balance the public's need for reliable financial statements with a company's need to keep information confidential. Until such transparency exists, every assurance by any of these companies has to be taken on faith. Transparency is necessary but not sufficient. Until we change the fact that user attention and profits are what these companies care about above all else, all the data sharing in the world will not address the problem. Policy and legal experts have previously testified before this committee on ways that incentives could be changed. Incentives matter: companies behave differently when they care about the quality of content. For example, because inappropriate ads could materially harm financial performance, most advertising systems place ad review as a step that must occur before a new ad ever makes its way to users; user-generated content, by contrast, is allowed to go live instantly. Incentives also shape companies' algorithms. TikTok and ByteDance feed young people in China a diet of educational science and math content via their recommendation algorithm, and the Chinese version of the app even enforces a daily usage limit. Contrast this with how U.S. companies target content to young Americans, optimizing for engagement and revenue at any cost. Any suggestion for more useful transparency will be met with many objections; the status quo is simply too lucrative. Do not underestimate these companies' ability to fight requests for information. The legal team at Google has as many lawyers as the FTC has employees. Given what we know about these companies' processes and culture, we should not expect progress voluntarily, and we should view their commitments extremely skeptically. However, with the proper transparency and regulatory environment, I believe we can change their incentives and start to see real, measurable progress against these problems. Thank you.

Sen. Peters: Thank you. Our next witness is Brian Boland, a former vice president at Facebook who led teams spanning partner engineering, marketing, strategic operations and analytics. He worked at Facebook for 11 years in several roles, including leading a 500-person multifunction team focused on product strategy, partner engineering, operations, analytics and marketing. These high-impact teams worked across Facebook products and features, including Watch, video, news and group admins. Before joining Facebook, he worked at Microsoft and other tech companies. Mr. Boland, you may proceed with your opening remarks.

Mr. Boland: Good morning, Mr. Chairman. Thank you for holding these hearings, which cover such important issues for our nation and the world, and thank you for inviting me here today to provide testimony on my experiences as a senior executive at Facebook, now known as Meta. Over the last few years, I have grown increasingly concerned about the roles that Facebook, Instagram, YouTube, Twitter and TikTok play in driving the growth of misinformation, extremism and generally harmful content. I worked at Facebook for 11 years in a variety of leadership roles, helping shape product and market strategies for a broad array of products including advertising, news, video, media and more. During my tenure at the company, I worked with the most senior executives, and I was deeply embedded in the product development process. In my last two years at the company, I worked on CrowdTangle, a tool that provides limited, albeit industry-leading, transparency into content on Facebook. What finally convinced me it was time to leave was that despite growing evidence that the news feed may be causing harm globally, the focus on and investments in safety remained small and siloed. The documents released by Frances Haugen, the Facebook whistleblower who testified here last fall, highlighted issues around polarization and the platform's ability to lead people globally down a path to more extreme beliefs. Those papers demonstrate thoughtful, well-reasoned documentation of the harms that concerned me. The research was done by highly skilled Facebook employees who are experts in their fields, and it was extensive. Yet rather than address the serious issues raised by its own research, Meta's leadership chose growing the company over keeping people safe. While the company has made investments in safety, those investments are small and routinely abandoned if they impact company growth. My experience at Facebook was that rather than seeking to research and discover issues on the platform before others found them, the company would rather reactively work to mitigate the P.R. damage from issues that came to light. I came to believe that several circumstances put Americans at risk from the content on these platforms.
The first is a growth-over-safety incentive structure that leads to products designed and built without a primary focus on safety. The next is the unprecedented lack of transparency from these platforms, which keeps us from analyzing content and understanding the impacts of these tools. Finally, there is the lack of clear oversight of the business practices of these companies. We have faced challenges like this before with new technologies. In the 1960s, Congress addressed the dramatic rise in fatalities from automobile use in the United States. That industry had experienced explosive growth, the companies focused on growth and sales, and it turned out safety didn't sell. The creation of the NHTSA empowered the agency to study data and take steps to make driving in America rapidly and significantly safer. Today, automobile safety is a selling point. With social media, there is extremely limited ability to study these platforms and no ability to chart our future, no equivalent of crash-testing a car. In the 1950s, we had no good data on deaths caused by cars, even as the toll was rising rapidly. That lack of data is where we are today with social media platforms. The reality, for all the debate about whether social media is predominantly good or bad, is that we don't really know, and if anyone tells you they know, they don't. I believe we have a right to know. The good news is that with the right incentives in place and rules around transparency, we can develop a better understanding of these issues. We can empower agencies and researchers to deeply understand the issues and, with better data, build a path to a future where we still get the amazing benefits of these products while mitigating the harms we barely understand today. Today I hope to shed light on the product development process, the internal and external structures of these organizations, and the critical importance of transparency. I appreciate your work to better understand these issues and deliver real-world solutions to the American people. Thank you.

Sen. Peters: Our final witness is Geoffrey Cain, a senior fellow at the Lincoln Network. He is an award-winning foreign correspondent and author whose work has taken him to the world's most authoritarian and remote places, from inside North Korea to the Trans-Siberian Railway to experiments in technological surveillance in China. He has served as a TechCongress fellow with the House Foreign Affairs Committee minority, supporting a range of issues including China, sanctions and investigative work. Welcome to our committee; you may proceed with your opening comments.

Mr. Cain: Good morning, Chairman Peters, Ranking Member Portman and members of the committee. It is an honor to be invited to testify on social media's impact on national security. Today we will talk about one of the greatest technological threats facing our homeland security and democracy: TikTok, the social media app that reports to a nefarious Chinese company called ByteDance. As an investigative journalist in China and East Asia, I have been detained, harassed and threatened for my reporting on Chinese technology companies. I will show you how TikTok has orchestrated a campaign of distraction and deflection to mask an alarming truth: Americans face the grave and unprecedented threat of software in our pockets that contains powerful surveillance and data-gathering capabilities,
owned by a private company that must comply with the dictates of a foreign authoritarian government ruled by the Chinese Communist Party. The CCP has signaled its ambition to exert global jurisdiction over private companies everywhere as a condition for doing business in China. TikTok is a disaster waiting to happen for our security and for the privacy of our citizens. We will have TikTok executives here later. According to their internal public relations guidelines, which were leaked to the media, they are required to downplay the parent company, downplay the China association and downplay AI. The public relations guidelines state that if you ask about the Chinese company ByteDance and its influence over its American product TikTok, which is used by so many of our teenagers, executives must tell you that ByteDance is a separate company in China and that you should talk to it instead. They will attempt to confuse you by claiming that TikTok takes a localized approach, hiring local moderators, implementing local policies and showing local content. They will not tell you about an individual known as the "master admin" in Beijing, who, according to leaks to the media, has had access to all American users' data. They also will not tell you that TikTok executives report to ByteDance executives in China, and that ByteDance reports to the Chinese Communist Party. TikTok's expansion into the American market was possible because China rigged the market: the Chinese government offered ByteDance protection while banning competing social media apps, Facebook, Instagram and Google. Like all Chinese companies, ByteDance runs an in-house communist committee that enforces the political loyalty of its employees. In 2018, ByteDance and TikTok's founder wrote a letter promising Chinese regulators that his company would follow core socialist values, introduce correct values into its products, and ensure that it would promote the Chinese Communist agenda. It would strengthen the work of party construction, deepen cooperation with the official party media and strengthen content review in line with party values. ByteDance's public statements in China should be cause for alarm, considering that American government employees, military personnel and workers use TikTok. I was an investigative journalist in China's western region of Xinjiang, writing my book about the Chinese surveillance dystopia, when I learned that ByteDance and TikTok were expanding into America. I knew this was ominous, because I had been speaking with a former worker for the Ministry of State Security, a major and extremely powerful intelligence body, who had told me that he had worked with numerous companies, including ByteDance, to expose the data of minorities in China. And it happened. ByteDance has also had an active role in suppressing news about the atrocities there, which included physical and psychological torture, concentration camps, forced sterilizations, and the destruction of mosques and cultural artifacts. This is serious. I am aware of my time; I have much more in my written testimony if you would like to ask about it. Thank you for your time today. I look forward to answering your questions.

Sen. Peters: Thank you. Extremist groups, including QAnon followers, ISIS and white supremacists, have certainly expanded their ranks by recruiting individuals on major social media platforms.
The Christchurch shooter, who killed 51 people and inspired the El Paso shooter, was radicalized on YouTube and livestreamed his attacks on Facebook to rally others to his cause. Three years later, a shooter in Buffalo, New York streamed his attack on Twitch; although Twitch acted quickly to take it down, the video was soon circulating widely on Facebook. So I ask this panel: why do these platforms' recommendation algorithms spread this extremist content so rapidly?

Mr. Roetter: The way to understand these recommendation algorithms is that they do not have intentionality about specific types of content. The way they work is that they assemble information: they model everything about you, your usage, your interests, your geography, who you are connected to, and they model the content, and they try to match the two. What makes this so dangerous is that there is a positive feedback loop. Pick something non-controversial, a hobby, say knitting. If I think you are somewhat into knitting, I may recommend knitting content to you because I believe you will engage with it. You do engage with it, and that makes you more interested in knitting, because you are doing the hobby more; it also feeds back into the algorithm as a signal that you like this kind of content. The next day, or the next session, the algorithm is more confident that you will engage with that content, and you go further down the rabbit hole. With knitting, that is fine. But the same is true for all sorts of content. Because of this feedback loop, if you have some proclivity, some interest in a topic, you will be fed more of it; that feeds your interest, and you are fed more still. That is why we see people who start off with more things in common than differences splitting and fracturing, as each goes into a world that is more different, with less and less in common with other people. This is an inevitable consequence of optimizing for engagement.
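The feedback dynamic Mr. Roetter describes can be sketched in a few lines of code. The following toy simulation is illustrative only; the update rule, topics and numbers are hypothetical, not any platform's actual model. Recommending whatever the model predicts the user will engage with, then updating the interest estimate on each engagement, drives the profile toward a single topic with no intent anywhere in the code.

```python
import random

TOPICS = ["knitting", "news", "sports", "conspiracy"]

# Hypothetical user: starts with a mild, mixed set of interests.
interest = {t: 0.25 for t in TOPICS}

def recommend(interest):
    # Engagement-maximizing choice: show the topic with the
    # highest predicted engagement (here, the interest score).
    return max(interest, key=interest.get)

def simulate(steps=20, learning_rate=0.3):
    for _ in range(steps):
        topic = recommend(interest)
        # User engages with probability equal to their interest.
        if random.random() < interest[topic]:
            # Positive feedback: engagement raises the estimated
            # interest, which makes the same topic win again tomorrow.
            interest[topic] += learning_rate * (1.0 - interest[topic])
    return interest

random.seed(0)
print(simulate())
# One topic's score climbs toward 1.0 while the others are never
# shown: the "rabbit hole" emerges from the loop itself.
```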
Sen. Peters: [indiscernible] So companies are able to change some of those algorithms to prevent that from happening? How would that work?

Mr. Roetter: They certainly could in theory, but it will never happen given the incentive structure. They are incentivized to maximize profit, and before they have profits, they are incentivized to show massive user growth to convince investors that they will be massively profitable in the future. The way they do that is by getting people to come back to their platform over and over, and the way they do that is by optimizing for engagement. As long as the algorithms are optimized to show you things you will engage with, we will always have this positive feedback property: I show you something you are interested in, you get more interested in it, and you become more likely to engage with it. You can build an AI to optimize for anything, but you pick an objective based on the overarching incentive of the environment you find yourself in; in this case, a public company working for shareholders. Until those incentives change, we should not expect the AI to optimize for anything other than profit maximization.

Sen. Peters: We will hear later this afternoon from the chief product officers of some of these major companies. Is it possible for them to set different priorities for product development to address the spread of extremist content? Is that within their purview? Is that something they should be able to talk about?

Mr. Roetter: In practice, it is not possible. The reason is that these are just individuals. This is not a matter of a few bad eggs running companies; this is a system people find themselves in. They are in a system where they have to report user growth, engagement, attention from users and profit. I should add, this game is a race to the bottom. If I build a product that is less addictive than a competitor's product, by definition user attention will go to the competitor, so I have to make my product more addictive to pull people back, or I will be abandoned by investors. Given that structure, there is no way a product leader or any other executive could optimize for anything other than those core metrics, engagement and revenue, because that is the system they find themselves in.

Sen. Peters: It has to be broader than that, but they are the front end, the beginning of it. They understand exactly how that incentive structure and those priorities shape the work they do, and they can talk about that, I suspect?

Mr. Roetter: They are doing exactly what you would expect them to do, given the environment they find themselves in. As long as the incentives for these companies are what they are, they will behave the way they are behaving; if they behaved differently, it would hurt the trajectory of the company.

Sen. Peters: Why are the actions taken by trust and safety teams at these platforms not enough to deal with this problem? We will hear a lot about these teams this afternoon. Why is it not enough?

Mr. Boland: It is true that they make some investments. The important thing to think about is that if trust and safety is siloed from the rest of the process, if it is a safeguard group that is not a core part of the way people build products, it will always be an afterthought. There will always be a team that has to fight the battle of tradeoffs between the ways they would like to improve trust and safety and the impacts on growth, and impacts on growth translate into impacts on revenue. That dual tension of not receiving enough resources and being at odds with the product development teams and process means the team has to fight for any intervention it wants to put in place. You could change the way product teams work with those organizations. A good example is the effort Facebook is now putting into privacy. We know that for a number of years Facebook, now Meta, was not at the top of its game on privacy. After its last issues with the FTC, the company invested in privacy, and it is now something that teams have to care about. As long as the team interested in safety is off to the side, and the incentives are not in place across the company to make sure that when people make day-to-day decisions they prioritize these efforts, that team is fighting a losing battle: in resources, because they are battling against a large number of people, and in incentives, because they have to justify every single change they make. It is not a core part of how teams think.

Sen. Peters: So at best they will be the follow-up. The products will get launched; they will potentially cause problems that the team may have recognized but was not able to interject effectively during the product development phase. Later they may be engaged, but at that point the genie is out of the bottle.

Mr. Boland: That's correct. You can understand this if you think about where these companies started and the short period of time in which they have grown to be as successful as they are. They still feel like startups in the way their leaders think, even though these are some of the biggest companies in the world.
At the beginning of the lifecycle, it was about figuring out products that would grow effectively in the world, and they have not matured out of that stage. I think we can get to a point where these companies do more in that space; we just have not seen them make the transition to being more responsible for their activities.

Sen. Peters: Thank you.

Sen. Portman: I will keep within the time, and I want to focus on TikTok. We have talked about TikTok being the most popular social media app in the U.S. I also think it poses a risk to our national security, and I want to dig deeper with both of these panels. My understanding is that under Chinese law, the Chinese Communist Party can access the data of companies run out of China, or with parent companies run out of China, and both ByteDance and TikTok have offices in Beijing. Under Chinese law, does TikTok have a legal obligation to give U.S. user data to the Chinese Communist Party?

Mr. Cain: Absolutely. TikTok executives would face a minimum of 20 days of detention if they refused to turn over data on anyone in the world. This could be anyone in China, or anyone traveling through China or through Hong Kong. This is a documented legal situation, and it is not something that TikTok, despite claiming to be an American company, can avoid. I would like to point out that TikTok dodges this question by noting that it is run by a Cayman Islands holding company, a shell company. This is a red herring to distract from the issue at hand. The American company, TikTok, and the Chinese company, ByteDance, both report to this Cayman Islands shell company. The company has never said how many people work for the shell company, but we do know that the CEO of ByteDance and the CEO of TikTok are the same person; this is listed on the Cayman Islands registry. The CEO is the same person running the ByteDance company in China.

Sen. Portman: We will hear from TikTok later, and based on the testimony we received in advance, they are going to say they have not provided data to the Chinese Communist Party, and that even if the CCP requested data, it would not be shared. Again, does China need to make a request to access this data, or does China have the capability to access it at will?

Mr. Cain: I am not aware of the Chinese government having the ability to simply open a computer and access it at will; it would happen through somebody at ByteDance or TikTok. And this has already been demonstrated. There was a BuzzFeed report that revealed leaked audio files in which TikTok employees said they had to go through Chinese employees to understand how American data was being handled. It pointed out that employees were saying there is an individual in Beijing called the "master administrator." We do not know who that is yet, but this person had access to all data in the TikTok system. When they say that this data is being kept separate, that point is simply disproven, because we have documentation showing the data has been shared extensively.

Sen. Portman: We will have a chance to talk to TikTok about this, but I appreciate your testimony today. We have calls to legislate more in this area: regulations, legislation. My concern is that we do not know what is behind the curtain of the black box. We have proposed legislation, the Platform Accountability and Transparency Act, to require platforms to share data with vetted and independent researchers, so we can know what is happening with user privacy and product development.
We have talked about the bias I believe is out there in social media today, and about many of the companies' other practices. Mr. Boland, you talked about this in your testimony. You said that to solve the problem of transparency, we must require platforms to move beyond the black box with the Platform Accountability and Transparency Act. Can you explain why that legislation is needed?

Mr. Boland: I believe the Platform Accountability and Transparency Act is one of the most important pieces of legislation before you all, because we have to address the incentives we have been talking about. To begin with, we are at a point where we are supposed to trust what the companies are telling us, and the companies are telling us very little. Facebook, to its credit, is telling us the most, but it earns a grade of a D on an A-through-F scale. In order to understand the issues we are concerned about, hate speech and the ways these algorithms can influence people, we need public understanding of, and accountability for, what happens on these platforms. There are two parts of transparency that are important. One is understanding what happens with moderation: what are the active decisions these companies are taking around content? The other is understanding what decisions the algorithms built by these companies are taking in distributing content to people. If you have companies reporting what they would like to report, you will hear a lot of averages, a lot of numbers that gloss over the concern, and if you look at averages across these large populations, you miss the story. If you think about the 220 million Americans who are on Facebook, and 1% of them are receiving an extremely hate-filled or radicalizing feed, that is over 2 million people receiving problematic content. In the types of data you are hearing about today, you get an average, which is incredibly unhelpful. By empowering researchers to help us understand the problem, we can do a couple of things. One, we can help the platforms, because today they are making these decisions on their own, and these are decisions that should be influenced by the public. Two, you can bring additional accountability through an organization that has clear oversight over these platforms; whether that be through new rules or fines, you have the ability to understand how to direct them. Today, you do not know what is happening on the platforms, yet you are asked to trust the companies. I lost my trust in what the companies, and what Meta in particular, were doing. The act would help our researchers understand the platforms better.
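Mr. Boland's point about averages can be made concrete with a small worked example. The numbers below are hypothetical, chosen only to mirror his 1% figure; the point is that a population-wide mean hides a badly affected tail.

```python
# Hypothetical: 220 million users; 1% receive feeds that are 90%
# problematic content, the other 99% receive feeds that are 1%
# problematic. Figures are illustrative, not platform data.
users = 220_000_000
tail_share, tail_rate = 0.01, 0.90
rest_rate = 0.01

average_rate = tail_share * tail_rate + (1 - tail_share) * rest_rate
print(f"Platform-wide average: {average_rate:.1%} problematic")  # ~1.9%
print(f"Users in the tail: {int(users * tail_share):,}")         # 2,200,000
# A sub-2% average sounds reassuring, yet 2.2 million people are
# seeing feeds dominated by harmful content. Reporting only the
# average makes that tail invisible.
```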
Sen. Portman: Do either of you disagree about the need for more transparency?

Mr. Cain: I would concur with the need.

Mr. Roetter: I think this committee is well poised, given its powers, to enforce transparency.

Sen. Portman: I will have additional questions later. I want to respect the time, but I appreciate your testimony.

Sen. Peters: Thank you. Senator Johnson, you are recognized for your questions.

Sen. Johnson: I think we all agree this is a big problem with no easy solution. I would meet with Facebook, and I appreciate what they are trying to do to pull down Islamic terror content. I think we agree that we do not want to be encouraging violent behavior on these platforms, but we need to protect free speech as well; it is a balancing act. Mr. Boland, you talked about the terms extremism and harmful content. I guess that is in the eye of the beholder; it is difficult to define. What I want to focus on is where we draw the line between taking down content that we would all agree is extreme and could induce violence, versus censoring legitimate political debate. Do you have any idea what percentage of Twitter employees are conservative versus liberal?

Mr. Roetter: I have no idea.

Sen. Johnson: Do you think it would be tilted to the left? Mr. Boland, would you want to answer that question?

Mr. Boland: I just don't know.

Mr. Cain: It's obvious, in my opinion.

Sen. Johnson: Let me use an example we are all aware of: the Biden laptop. Do you believe the Washington Post's reporting that there is authentic information on the laptop?

Mr. Roetter: I'm not sure. I will say these are massive platforms.

Sen. Johnson: I've got very little time. Mr. Boland, do you assume that is authentic information?

Mr. Boland: I don't have an opinion on the laptop.

Sen. Johnson: Twitter was actually very effective when it blocked the New York Post articles on the Biden laptop. We had Jack Dorsey in front of the Commerce Committee in October 2020, and Senator Cruz and I asked him, as we were talking about Russians using the platforms to impact our elections, which everybody agrees can happen: do you believe Twitter could impact the election? Mr. Dorsey said no. It is hard to argue they are not having an impact.

Mr. Cain: Absolutely.

Sen. Johnson: There is a problem there. I appreciate you acknowledging that fact. We had 51 former intelligence officials, and I have no idea on what basis they wrote the letter that came out immediately. It might be because the FBI had a scheme in August of 2020 to downplay the information on the laptop. They came out and said that the laptop had all the earmarks of a Russian information operation. It seems to me that the letter itself was an information operation. So we had one platform that censored the story, and Facebook throttled it back. A company called Media Research Center polled, after the election, 750 Biden voters in seven swing states who were unaware of the emails, texts, testimony and transactions on the laptop, as well as of Senator Grassley's and my report, which was based on interviews with U.S. persons and documents. Asked whether they would still have voted the same way, 16% said they would not have: 4% would have voted for a third party, and 5% would not have voted at all. That is pretty strong evidence. What Facebook and Twitter did impacted the 2020 election to a far greater extent than anything Russia could ever have hoped to do in 2016 or 2020. I want to talk about other disinformation claims coming out of this committee. A day or two after Senator Grassley and I issued our report, based on U.S. documents and interviews with U.S. persons, our committee chairman, then the ranking member, issued a press release calling it part of an effort to amplify Russian disinformation. He said I generated a partisan report rooted in Russian disinformation. Do you want to retract that false allegation now? Now that we know the Biden laptop is accurate, not one piece of information provided in my report has ever been refuted. And you, as ranking member of the committee, accused me repeatedly of soliciting Russian disinformation. Do you want to retract your false allegation?

Sen. Peters: No. Let's focus on...
Sen. Johnson: This is exactly the type of harm that can be done to our political process when you have these big tech companies engaging in political debates, censoring one side of the political spectrum and amplifying the false allegations of the other side. Does anyone want to dispute that?

Sen. Peters: I think it is important that we get the data to know, and this is why the act is so critical to our nation and the globe. If you are able to look at the data to understand what happened with content moderation, and you are able to see the distribution, you can compare that data across platforms and see what impact it has.

Sen. Johnson: Let me just say, one part of transparency would be having people who work, or used to work, for these platforms at least acknowledge the highly political nature of the individuals who work for them. It is obvious to everyone. Mr. Zuckerberg spent half a billion dollars impacting the 2020 election, took over the Green Bay election system in highly partisan fashion, and 90% of the money he spent in Wisconsin went for Democrats. There was enormous political activity within these social media companies. Let's be honest. Let's be transparent. Let's be honest in our transparency.

Mr. Boland: In my experience, setting aside whether individuals had political leanings or not, I did see political leanings shape decisions made inside the company.

Sen. Johnson: I saw it in the New York Post case. I think it is pretty obvious.

Sen. Peters: Senator Lankford, you are recognized for your questions.

Sen. Lankford: Mr. Cain, you have spent a lot of time studying authoritarian regimes and how social media can be used to control their populations. The Chinese government is doing this, and you have studied what they have done. One of the features of several of these platforms is the permissions: when you join, the user gives ByteDance the ability to use the microphone. How is that data used in an authoritarian regime?

Mr. Cain: It may involve espionage against government officials. This is a major, major Trojan horse that needs to be dealt with, and the Chinese government has made clear in its national artificial intelligence strategies that it needs data; data is its biggest target.

Sen. Lankford: One of the things I have seen from TikTok is the ability to keep track of keystrokes. If you use their app to go to other websites, they can track your keystrokes, including credit cards and user IDs. Factual or not factual?

Mr. Cain: You are absolutely correct.

Sen. Lankford: They have made the statement that "we don't use that for other purposes; we just maintain it." That data is then available to the Chinese government. If it is going through TikTok, they have access to users' facial recognition data and passwords. That is the building of a database system. This is not some hypothetical possibility; this is actually occurring?

Mr. Cain: Precisely, this is occurring. On the data that TikTok gathers, there was a recent study by Citizen Lab, which does work in this area, and it found that TikTok gathers an unusually large amount of data from its users. As for the keylogging software found and reported on recently, TikTok has said it does not use it, but it is there, and if the Chinese Communist Party wants access to it, it has the power to get it.

Sen. Lankford: Thank you. Mr. Roetter, I want to ask you a bit about Twitter's value system; you have a unique perspective on it. Some countries block Twitter, and yet the leaders of those countries are allowed to be on Twitter and are able to put out authoritarian propaganda. They are still allowed to do that.
Twitter's value system seems to shift from country to country: even if a country blocks the platform, its leaders are still allowed to put out propaganda on the platform. Am I wrong? What have you seen?

Mr. Roetter: Twitter is obligated to follow the laws of the countries it operates in.

Sen. Lankford: There also seems to be a patchwork of values across these countries. In our country, we say we stand strong for this principle, but in another country they don't.

Mr. Roetter: That's a fair characterization.

Sen. Lankford: Is that a problem long term? Or is giving authoritarian regimes a platform like that just a matter of having customers, even if those customers block the use of Twitter in their country?

Mr. Roetter: The bigger problem is the consequence of the overarching business model. That is a consequence of trying to get everyone to use the platform while being subject to some constraints, whether from local governments or otherwise. The broader problem is the consequence of who sees what content and what that does to people in the real world, as a function of the incentive structure created for these companies.

Sen. Lankford: Recent testimony about Twitter has revealed that they have had on their team Chinese spies, as well as individuals who worked for the Indian government and the Saudi government, who were on staff and funneling information back to those authoritarian regimes. I would assume Twitter has a process for vetting its employees, and I am making the assumption that while you were there you saw how that vetting happened. How are they vetting their employees and evaluating individuals so they do not end up with Chinese spies or agents of the Indian government?

Mr. Roetter: It may have changed, but there were background checks, and nothing I saw made me think that process was designed to counteract a threat model of governments inserting spies. It was a much more pedestrian process than that.

Sen. Lankford: With Twitter, it seems the Indian government was requiring this, so it does not seem to be a vetting issue; it seems to be a requirement: if you are in our country, we require backdoor access, basically.

Mr. Roetter: I am not sure; I am not familiar with the rules for operating in India or Saudi Arabia.

Sen. Lankford: This is an issue we will have an opportunity to talk about later. Mr. Boland, you spoke about the algorithms and how the platforms seem to reward angry engagement: the angrier you become, the easier it is for the algorithm to place content. I have made this recommendation to Facebook for years: why couldn't the page owner have the option to make comments private? People who want to make comments could still make them, and those comments would come to the page itself, but other individuals could not see them. There could be an option to turn off public viewing: if you want to make a comment to me, you can do that, but it prevents the people making comments on my page from attacking one another. What Facebook has created is a place for people to scream at each other; the political dialogue is hostile, and the algorithm reinforces the angry comments to drive engagement. What I am describing is giving the owner of a page the option to keep comments between the owner and the commenters, so that commenters cannot attack each other. Is that possible to do?
Mr. Boland: An important step, to come back to the transparency point, is that we could have researchers and academics involved; that could move us a lot faster.

Sen. Lankford: It seems that they want that engagement and that anger, because that keeps people engaged, according to Facebook. Instead of lowering the temperature on a page, they seem to want high temperatures in as many places as possible. The anger responses seem to be built into the algorithm, to keep accelerating people's return to that page over and over again.

Mr. Boland: The algorithm just knows whether it gets results or engagement. Without a qualitative view of the type of content, the algorithms will chase what they are told to chase, and they are really good at it; they are really good at going after the metrics they are given, and with machine learning you will see more of that. If it is that kind of content that gets a reaction, that is what will grow.

Sen. Peters: Senator Romney, you are recognized for your questions.

Sen. Romney: Thank you. Mr. Roetter, your comments about the incentives of a corporation, which is trying to make as much money as it can, are accurate. Every newspaper and magazine, TV show, broadcaster and radio station does the same thing: how can we get more eyeballs, and what gets more attention? So what we are seeing with social media is not entirely unprecedented. I wasn't around during the early days of broadcast, but I presume it was the Wild West initially. There was the threat of government intervention over what they could broadcast and who could advertise, and the industry came together and said, we are going to start establishing rules. I understand the government also established an entity to set rules for broadcasting: certain hours of the day, how many depictions of violence could be broadcast on networks. We have not done that with regard to social media, and social media is far more engaging and captivating of our young people, as well as many adults. What we need to decide is whether the industry should come together and do its own decision-making to draw the lines, to say these are the things we have all agreed to; and if the industry does not do that, whether the government should, whether we should establish an agency that says these are the rules and you are going to have to follow them. If the industry is willing to take on that measure, should it? And if not, should the government?

Mr. Roetter: The best predictor is past behavior. The industry will share information, but it is not the information you would share if you were genuinely interested in providing transparency. Brian talked about sharing an average when you have a distribution that is so non-uniform; we have seen exactly those kinds of numbers being shared. This is a two-step process. First, we do not know what is happening on the networks today, and that is why conversations about the networks revolve around cherry-picked examples; you can prove the opposite point from the other direction, depending on which example you cherry-pick. We need unbiased raw data that can be processed so that we understand what the platforms are doing. Once we have a better view of what they are doing, in a representative way, then we can talk about whether we believe their incentives will create the right outcomes, and what the true impact of all this is.
Then, faced with a shared understanding of what the networks are doing, it is possible that the companies would come together and decide to self-regulate, because the prospect of someone else regulating them is worse. But first we must demand transparency, so we have a shared view of what is happening.

Mr. Boland: Usually you see industries take self-regulatory steps when external pressure and legislation feel like they are pending, and I do not think they feel that from the United States today. What is particularly terrifying is that we do not know what is happening on these platforms today. With broadcast, you knew; everyone could see what was being broadcast. With the way distribution works today, you have such a spread of content that on an average basis things can look like they are getting better. The industry can tell you, here is the average of what we are doing; but a person receiving the 99th-percentile most extreme feed may see an increase in the types of harmful content, and we may never know. We are in an era of reduced transparency; Meta has provided more transparency in the past than it does today. And you have an increasing TikTok-ization of media, in that Facebook is now moving toward a TikTok model of unconnected, algorithm-driven content. These pockets are going to grow, and we will never see them unless we mandate that we have to see them.

Sen. Romney: Mr. Cain?

Mr. Cain: Thank you, Senator. TikTok has seen a number of leaks, and I have discussed this with former TikTok employees. The company is not transparent and does not do what it promises. It has this connection with the Chinese Communist Party and will not be transparent about what it is doing. China is not a place that values transparency; it is a one-party, authoritarian state. I do not think we can count on a company such as TikTok or its parent, ByteDance, to do anything to address the problem at hand. To be honest, this committee is uniquely placed to address this problem of transparency, because the subpoena power that can be used here could require TikTok to open up its emails and show us what is going on and what the China-based executives are saying to the American executives.

Sen. Romney: I share your view in that regard. I question whether we should allow an authoritarian regime to have a social media capability of the scale they have in our country and to gather the data they have. I have a lot of grandkids, and these days I am concerned about their exposure to social media. Have other countries figured out a better way, a better way to reduce the draw and the compelling nature of social media? I understand that between various TikTok segments in China, there is a five-second gap where the screen goes blank. Are we not doing what other nations are doing to protect our kids? I will let anyone respond to that, maybe Brian.

Mr. Boland: It is a tricky one, in terms of the mandate and the steps we take forward. You are describing friction, and there are known steps we can take to introduce friction.

Sen. Romney: Has any nation done some of those things?

Mr. Boland: I do not know about mandated friction. Europe has done a good job of requiring transparency, but I have not seen that prescribed for specific products.

Sen. Peters: Senator Hawley, you are recognized for your questions.

Sen. Hawley: Mr. Boland, when were you at Facebook?

Mr. Boland: I was there from 2009 to November 2020.

Sen. Hawley: Was it normal during your time at Facebook for executives or team members of Facebook to coordinate closely with the United States government?
Mr. Boland: I am not aware of that.

Sen. Hawley: You never had any contact with U.S. government employees during your time at Facebook?

Mr. Boland: Not that I recall.

Sen. Hawley: There was an email saying that "our teams met today to discuss misinformation." On July 23, 2021, Facebook thanked the department for meeting earlier that day about how it adjusts policies on vaccine-related content. On April 16, 2021, the White House circulated a Zoom meeting invitation about vaccine misinformation. A Facebook employee thanked the CDC for responding to misinformation queries: "we will get to those as soon as authorization happens." In July 2022, a Facebook employee reached out to the CDC about doing a monthly misinformation-debunking meeting. On May 11, 2021, Facebook employees organized a "be on the lookout" meeting with CDC officials. On July 21, 2021, an official asked, "any way we can get this pulled down?" and cited an Instagram account; within 45 seconds, Facebook replied, quote, "we are on it." Was that normal during your time at Facebook?

Mr. Boland: I do not have experience around that.

Sen. Hawley: You do not have any experience with this, and suddenly it all starts in 2020, after you left, and you do not know anything about it at all?

Mr. Boland: I do not.

Sen. Hawley: That is remarkable. I thought you were the former vice president of marketing, engineering and analytics at Facebook, and none of this ever happened. Why do you think this happened after you left? What drove this collaboration, where you have Facebook becoming an arm of the United States government, censoring private information and personal speech at the behest of government officials?

Mr. Boland: Without the specific context of what they were talking about, whether it was public or personal content, from what I have read in the same documents you have access to, there were a lot of steps taken around covid response and covid misinformation that may have presented a unique scenario in which the company took steps to coordinate that way.

Sen. Hawley: By which you mean to censor the speech of ordinary Americans at the behest of the President of the United States and his administration. I commend to everyone who is interested these emails, which were discovered through the litigation by the states suing these tech companies, including your former employer, Mr. Boland. They discovered this trove of information on extensive coordination between the Biden administration and the company. Early on, if you questioned whether covid had anything to do with a lab, you were marked as disinformation and you were censored, only to have the President of the United States admit that the possibility that covid came from a lab is a viable theory. We see the same thing with people who had questions about masks and vaccine efficacy. What safeguards were in place when you were at Facebook to protect Americans from having government censors access personal information like this?

Mr. Boland: During my time, the company was more reluctant to take down speech and very careful about trying to remove content. I do not think the company studied the content on the platform as severely as you would like.

Sen. Hawley: Not like they were doing later, when they were looking at Instagram accounts and removing them?

Mr. Boland: That is not a scenario I came across. It is hard for me to comment on the covid pandemic response.
Sen. Hawley: I find it hard to believe that Facebook suddenly became an entirely different entity and was interfacing with the United States government in a different way only when Covid happened. Let me ask you, Mr. Roetter: you were an engineer at Twitter?

Mr. Roetter: That is correct.

Sen. Hawley: Yesterday, Mr. Zatko testified to another committee I sit on that 4,000 engineers at Twitter had access to all of the personal information, user data, and geolocations of Twitter users. Is that accurate?

Mr. Roetter: He joined the company after I left, so I do not know if that is accurate.

Sen. Hawley: But you were an engineer. Did you have access to user data?

Mr. Roetter: I was the head of engineering for the entire company.

Sen. Hawley: I am looking for a yes or a no.

Mr. Roetter: No. I think I could have gotten it.

Sen. Hawley: If you could get it, that is what we call access. Did you ever access any user data?

Mr. Roetter: No.

Sen. Hawley: Were you ever aware of Twitter engineers doxxing Twitter users?

Mr. Roetter: No.

Sen. Hawley: Mr. Zatko testified that he saw that happen.

Sen. Rosen: There is a lot to unpack there. Thank you, Mr. Chairman, and thank you to the witnesses for being here today. Transparency and accountability: those are the words of the day, because we know what social media companies do with data. I am a computer programmer. You analyze the data, and the data tells the story, if you are smart enough to listen to it. Companies look at data in order to enhance their predictive engagement algorithms and to target recommendations based on other content. There are certain vulnerabilities here. This is great when you are shopping for a new outfit or new furniture, but maybe not so great when you are on an extremist or violent website or looking at harmful content. There has to be greater transparency into the platforms' promotion mechanisms and how content spreads from platform to platform. And consumers means not just individuals; small businesses, schools, everyone is on these platforms. When we say consumer, we cannot think only of the individual. We understand that these algorithms determine how this content reaches their feeds. Some social media platforms have standards in place for content, but they are often inconsistent in implementing those policies, and the content flourishes. I want to cut to the chase. Mr. Roetter and Mr. Boland, is there a difference in how predictive user-engagement algorithms treat this content versus other content, and how might we regulate otherwise agnostic algorithms to identify illegal or extremist content? How do we take the agnostic out of the math? Mr. Roetter?

Mr. Roetter: Today, the algorithms are doing what they are intended to do, which is to maximize engagement on the platform. These companies are very smart and have a lot of engineers, a lot of money, and a lot of computational power. They will change what the algorithms do. If companies were penalized for sharing certain types of content, these algorithms would no longer promote that content, because it would no longer be optimal to do so: the extra engagement they get would not be worth the penalty they would pay. Without transparency, we are not going to get there. These algorithms will behave optimally against whatever incentive they are given; today they are maximizing engagement, but you could change that.
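A minimal sketch of the incentive change Mr. Roetter describes, assuming a hypothetical feed-ranking score; the field names, scores, and penalty weight are invented for illustration and do not reflect any platform's actual system:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_engagement: float  # model's estimate of clicks/watch time
    harm_score: float            # hypothetical classifier output in [0, 1]

def rank_score(post: Post, penalty_weight: float) -> float:
    # With penalty_weight = 0 the feed purely maximizes engagement; raising it
    # makes promoting likely-harmful content "not optimal", as described above.
    return post.predicted_engagement - penalty_weight * post.harm_score

posts = [
    Post("a", predicted_engagement=9.0, harm_score=0.8),  # engaging but likely harmful
    Post("b", predicted_engagement=6.0, harm_score=0.1),
]

for weight in (0.0, 10.0):  # no penalty vs. a meaningful penalty
    feed = sorted(posts, key=lambda p: rank_score(p, weight), reverse=True)
    print(weight, [p.post_id for p in feed])  # 0.0 -> ['a', 'b']; 10.0 -> ['b', 'a']
```

The same optimizer produces a different feed once the incentive changes, which is the point of the testimony: the math is agnostic, the objective is not.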
Mr. Boland: Not only do we not know what is happening on the platforms; the platforms themselves do not know what is happening on the platforms. The turning point for me, the thing that pushed me toward these concerns, was the claim that nothing related to January 6 happened on the platform. Then, after the fact, it turns out there was a lot that spread on the platform, and there were concerns around it. In order to change these algorithms, part of it is understanding what is happening and having conversations about what we think the right distribution is. Facebook has proven it can do this: with things like QAnon, after the fire was lit and had burned through, it managed the distribution of that type of content. It is possible when we commit to managing toward it. The problem is that it is all after the fact, after the damage has been done. You go back and say, there was this set of articles and conversations, rather than saying, we have a whole surge of people who can quickly spot things, read the issues, and address them. It is complicated; human speech is very complicated and nuanced.

Sen. Rosen: They want to have this lack of understanding so they have deniability on the back end. You do not know what is happening, and it is found only after the fact. By not wanting to do the analysis ahead of time and understand their own platform, they are setting themselves up for deniability. But we are going to move on to cybersecurity. We know the whistleblower complaint from Twitter's former head of security depicted that they were unable to protect 238 Twitter users, influential figures, and heads of state from spam and security breaches. The complaint describes servers running vulnerable software and paints a dire picture of the lack of protection for user data. I am concerned about cybersecurity. Companies are laser-focused on growth and not protection, so individuals, schools, critical infrastructure, all of the things they are responsible for here, face potential risks. Based on your experience working at Facebook and Twitter, is cybersecurity a high enough priority for the social media platforms? Do the platforms' security teams work with application development to protect against cyberattacks? Are they looking for these breaches? How are they working on that, and how does this threaten our security, even our national security?

Mr. Roetter: The teams do work alongside engineering, but it is not a primary driver the way growth is. You need to build something to drive usage and revenue, and make it secure enough. The answer to your question is only knowable if you know the nature of the threat, and whether bad actors have tried to break in and not succeeded.

Sen. Rosen: There are not protections built in against people trying to breach the data? No one is actively looking for data breaches?

Mr. Roetter: You are finding out after the fact in many cases. There are cases of penetration testing, people simulating trying to break into something.

Mr. Boland: My sense is that they are quite invested in protecting data from a security standpoint. Where there is a will and a desire to make progress, I believe they can.

Sen. Rosen: Thank you. I will yield back.

Sen. Peters: Senator Hassan, you are recognized for your questions.

Sen. Hassan: Thank you, Mr. Chair, and thank you to our witnesses for being in front of the committee today. I want to start with a question that builds on what Senator Rosen was discussing; this is for Mr. Boland and Mr. Roetter. Terrorists have livestreamed their attacks on social media, and these livestreamed attacks encourage individuals to commit future attacks. Are there ways for social media companies to quickly identify terrorist content?

Mr. Boland: For livestreamed videos, AI could try to spot these types of attacks. The models have gotten a lot better since I worked there a few years ago. I am not an expert on the extent, but they have made strides.

Mr. Roetter: It is certainly possible. It is a hard technical challenge, but you can build models to figure out the real-time content of videos. They will not be perfect, but you can certainly use them.
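A rough sketch of the kind of real-time screening the witnesses say is possible, assuming a hypothetical trained frame classifier; `classify_frame` is a stand-in stub, and the sampling rate and threshold are invented for illustration:

```python
from typing import Iterable

HARM_THRESHOLD = 0.9  # confidence above which a human reviewer is alerted

def classify_frame(frame: bytes) -> float:
    """Stand-in for a trained vision model scoring violence/weapons, 0..1."""
    return 0.0  # placeholder so the sketch runs; a real model goes here

def escalate_to_reviewer(frame_index: int, confidence: float) -> None:
    print(f"frame {frame_index}: flagged for human review ({confidence:.2f})")

def monitor_stream(frames: Iterable[bytes]) -> None:
    # Sample roughly one frame per second (at 30 fps) so review can begin
    # while the stream is still live, not after the fact.
    for i, frame in enumerate(frames):
        if i % 30 != 0:
            continue
        score = classify_frame(frame)
        if score >= HARM_THRESHOLD:
            escalate_to_reviewer(i, score)
            break  # hand off to humans rather than keep streaming unreviewed

monitor_stream([b"\x00"] * 90)  # demo: three sampled frames, none flagged by the stub
```

As both witnesses caution, such classifiers are imperfect; the design question is how quickly a borderline stream reaches a human, not whether the model alone is trusted.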
Sen. Hassan: This is an increasingly urgent issue, because we are seeing these attacks happen much more quickly, in part because of the influence of social media. I look forward to following up with you both on this. Another question for the two of you: Facebook is currently running an advertising campaign on this topic, and the company says it spends billions of dollars on cybersecurity. These numbers, however, are meaningless without proper context. What metrics should these companies provide this committee to help us fully understand their commitment to security and safety?

Mr. Boland: You are 100 percent correct that the context of those numbers matters; there is a massive imbalance of investment. They will also give you numbers of employees, but the number of engineers matters. You could have non-technical employees in a review process looking at content, but what is important is how many of the engineers are put on these problems. We want an understanding of where they allocate their engineers for these types of problems. They do not need to show their entire org chart or how they are working on the metaverse, but how many engineers are working on these security and safety issues, and how they are allocated by country, by topic, and so on. That lets you judge whether the investment is adequate relative to the total number of engineering employees.

Mr. Roetter: What is important is that we get metrics that show what results they are getting, not metrics that amount to, here is what we are trying to do. That would never work on Wall Street; you cannot just say you are trying to make profits this quarter, you have to show what the results are. With engagement data on content, we would be able to study how individual people encounter certain content, how certain content spreads widely, and whether, after this investment has been made, anything has actually changed. We need metrics where we can measure the actual result, not, we are trying really hard, so please be happy. And I worry that a lot of times, because it is so painful, we focus on extreme examples of content. There is a broad swath of content that influences people that does not feel as scary. That is something that terrifies me, and it is stuff we do not get to see without transparency.

Sen. Hassan: I thank you for the testimony, and I will yield back.

Sen. Peters: Thank you, Senator Hassan. During your opening statements, each of you discussed the element of profit in these companies, and we have talked about it at length over the course of these hearings. Mr. Boland, you talked about how Facebook does not incentivize limiting the spread of harmful content. Could you tell the committee what metrics inform employee compensation? What goes into that?

Mr. Boland: For employees at Facebook, it is about rewards, and the rewards you receive are your cash and stock compensation. If you are building products, you are judged on the success of the products, and that success is defined by a set of metrics around whether the product is being used. Let us say you are building a video product. The metrics will be about how many hours are spent watching videos, the user growth, how many people are using it, and whether that use is spreading geographically. You incentivize on those hard metrics, and you are not incentivized around what kind of content your video product is distributing. What sits underneath, what is actually driving the growth, becomes somebody else's problem.
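A toy illustration of the incentive structure Mr. Boland describes, with invented metric names, targets, and weights; this is a hypothetical scorecard, not Facebook's actual review system. Growth metrics are the only inputs that can move the score, so safety work can never improve it:

```python
def review_score(growth: dict, safety: dict,
                 growth_weight: float = 1.0, safety_weight: float = 0.0) -> float:
    # With safety_weight fixed at 0, nothing a team does on safety can ever
    # raise its score: the incentive gap described in the testimony.
    return (growth_weight * sum(growth.values())
            + safety_weight * sum(safety.values()))

# Hypothetical half-year results for a video product team, each expressed as
# a ratio of actual result to target so the numbers are comparable.
growth_metrics = {
    "hours_watched_vs_target": 1.2,
    "user_growth_vs_target": 1.1,
    "new_geographies_vs_target": 0.9,
}
safety_metrics = {
    "harmful_content_reduction_vs_target": 1.5,  # strong safety result...
}

print(review_score(growth_metrics, safety_metrics))  # ~3.2: growth only, safety ignored
```

Raising `safety_weight` above zero is the structural change the witnesses keep returning to: until safety results can move compensation, they remain "somebody else's problem."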
There may be company-wide goals. When I was at the company, I was not aware of any safety or trust goals, but even where such goals exist, the problem is that they do not drive individual behavior. The company goals are there, but you think about what you individually and your team are told to deliver, and that is always usage metrics and product-growth metrics.

Sen. Peters: So that is what defines success. There is not a trust and safety metric?

Mr. Boland: My understanding is that the team has been disbanded and moved into a central team. I did not experience product teams or others carrying a metric that incentivized trust or safety.

Mr. Roetter: I agree with all of that. There is also the promotion system. The problem with trust and safety metrics is that companies look at several goals, and one of them is trust and safety, but the other metrics always win. If I am an engineer building a new live-streaming video service, and I launch that product and it gets usage, that is a feather in my cap and something I can say that I did. That will help me with promotions, compensation, and career advancement. If at the last minute I decide not to launch that product, because I realize I cannot control some of the safety aspects, I get zero credit for that decision. I have done nothing for the company over those months.

Sen. Peters: Your future advancement would probably be questionable as well.

Mr. Roetter: A product that I build and then do not launch is no different, in terms of the credit I get, than if I had just not shown up to work. That is not a good place for an employee to be. But you can change incentives, and you can change the way that people show up, not just through goals but through process. When Facebook turned from desktop to mobile, Mark Zuckerberg required that all products demoed to him show mobile in the demonstration, with designs built around that. He kicked out the first team that did not have that design, and suddenly everyone was thinking about mobile design. If part of every design discussion were the question, in what harmful ways could this product drive extremism, you would have a radical change in the way people show up to those meetings and think about the negative impacts of their product.

Sen. Peters: Mr. Boland, it is my understanding that you voiced objections about how Facebook's recommendation algorithms promote extreme and even racist content. Is that correct?

Mr. Boland: Yes.

Sen. Peters: What was the result of you expressing your concerns to leadership?

Mr. Boland: It was particularly disappointing. I proposed internal steps that I thought would help mitigate the problem, including bringing in more external researchers and sharing more information publicly. I had a range of responses, from, you are wrong, that is not the case, that is just my view, with no counter-evidence offered, to some who said, this is a problem, but not something we are working on right now.

Sen. Peters: On the responses that said you were wrong: you were working for a company that made a lot of decisions based on data, but this was something they wanted to ignore.
Mr. Boland: The concerning moment for me was when I had my moment of coming to terms with believing that the product could do more about these harms, and I started looking at a variety of things that research teams were doing internally to see what they were saying, and the internal dialogue was troubling. There was an overview of polarization research from June of 2020 that talked about political polarization. One of the lines was: we have not researched and have little understanding of racial, ethnic, or political polarization.

Sen. Peters: Mr. Roetter, one of the documents submitted by the Twitter whistleblower last month was a 2021 study he commissioned of the site's integrity team capabilities. The study found that Twitter planned to announce a new product just weeks before the 2020 election. The integrity team said it, quote, had to beg the product team not to launch before the election, because they did not have the capability to take action on misinformation on the new product, unquote. The study also found that while product teams solicit feedback before product launches, they are incentivized to ship as quickly as possible and are thus willing to accept security risks. Are these findings consistent with the pattern of decision-making that you saw?

Mr. Roetter: The caveat is that that example happened after I was there, but it is consistent. I would be surprised if the product team had done anything else. We used to talk about product managers as the mini-CEOs of their products: they get feedback from other teams, but it is their decision to launch or not. There is no possible credit or reward for not launching, while there is possible credit and reward for launching. Since any new product with some content would get some usage, there is every reason to launch and not worry about other issues.

Sen. Peters: Thank you. Senator Portman, any remaining questions?

Sen. Portman: I would like to follow up on that particular issue. Twitter Spaces, an audio function newer to the platform, was allegedly rolled out in such a rush that it had not been fully tested for safety. We are told that in the wake of the withdrawal from Afghanistan, it was exploited by the Taliban, and Taliban supporters talked about how cryptocurrency could be used to fund terrorism. Is that accurate, Mr. Roetter? And second, is it common for Twitter to launch products that lack content moderation capabilities? You said product teams are sometimes under pressure to ship products as fast as possible. Is that why it happened?

Mr. Roetter: Twitter in particular has a history of being worried about user growth and revenue growth. It is not the runaway success that Facebook or Google are, so there is often extreme pressure to launch things. There was a saying: if you walk around and ask enough people whether you can do something, eventually you will find someone who says no. The point of it was to emphasize that you should just get out and do something. The overwhelming metrics are usage, and you would never get credit, never get promoted or receive more compensation, for not doing something because of potential negative consequences for safety or otherwise. The view is that whoever says no probably just has a reason not to take action. There is a huge bias toward taking action and launching things at this company.

Sen. Portman: Are you worried about this issue, with the Taliban having exploited it? And do you think my example shows that the Platform Accountability and Transparency Act would be helpful in creating more understanding?
Mr. Roetter: I have not read the draft yet, but having more understanding of what algorithms drive decision-making could be extremely valuable. Without something like that, I would expect this to keep happening.

Sen. Portman: On the safety and trust issue, particularly the business decision-making processes: Meta disbanded its Responsible Innovation Team last week. Did you see that?

Mr. Boland: I did, and it was extremely disappointing.

Sen. Portman: My understanding is that the team examined the potential negative effects of product development. Why are you concerned about the decision? Tell us about how you interacted with integrity teams while at Facebook.

Mr. Boland: I know people who left that team: people of very high integrity, intentional about responsible design of products. Without that kind of function helping to shape other teams, I fear that Meta will not continue to have these considerations as part of its conversations. You can think of that group as influencing, indoctrinating if you will, the engineers in how to think about these issues. That is weaker than hardcoding it into the incentive structure, which remains the missing element, but it has driven important conversations about how to ethically design products. It was a valuable unit. Meta says it is interweaving that work into everything across the company, but that is a good way to dodge the issue; I do not think they will invest in it as a team. This comes at a time when Meta is still developing the metaverse, and we do not know how the metaverse is going to play out. I am concerned because the patterns we have seen in the past, and what we understand about content and distribution, are different in the metaverse. Before this hearing, I spent a lot of time trying to understand the risks of the metaverse, and it feels very risky to me. It feels like the next space that will be underinvested in. That is concerning.

Sen. Portman: I will put the same question to Mr. Roetter, in terms of how to evaluate these trust and safety efforts, particularly on transparency.

Mr. Roetter: If we get from that more information that illuminates what these algorithms are doing and what the incentive structures are, that will be extremely helpful. Today, we are operating in a vacuum, and we see that people will cherry-pick one example and use it as evidence of whatever their theory is about what these companies are doing; it must be true, because here is one example. The fact of the matter is that these companies are so massive that you can cherry-pick examples to prove almost anything you want about them. Without broad-scale, representative data from which we can compute what is being promoted and reverse-engineer what the incentives might be, we are never going to change anything. The issue we face today is that we have to trust the companies without a robust set of data for understanding what is happening, and leaving the platforms that shape public conversation this opaque is radical. Twitter will tell you that it does not like to put its thumb on the scale when it comes to the algorithm, but you do not realize how much you are leaning on the algorithms; they are already doing a lot to shape discourse and what people experience. We do not get to see it, and we have to trust the companies to share information with us that we know they are not sharing. I think the Platform Accountability and Transparency Act is critical, and we need to move quickly, because these things are accelerating, to understand what is actually happening on the platforms.

Sen. Portman: Mr. Cain, you have the last word.
Mr. Cain: There are a number of issues addressed today that are significant for the position of America in the world. I have personally seen major changes in the world of social media, having been in China, Russia, and recently Ukraine, and my greatest concern is that we are giving too much ground to authoritarian regimes to undermine us in any way they can. The software we are using, the AI, these are ubiquitous. This is not the Cold War, where we had hardware and missiles pointed at each other. Now we have smartphones, and it is entirely possible that the Chinese Communist Party has launched major incursions into our data within America to try to undermine our liberal democracy.

Sen. Portman: That is a sobering conclusion, and I do not disagree. I appreciate your testimony. Thank you all.

Sen. Peters: Let me follow up with a brief question on transparency. It is clear we need transparency and the active involvement of researchers who use that data, whether academics, people who will write scripts, or journalists; everyone needs to be engaged. Do you think there is a way we can protect user data and still provide the kind of data that is necessary for these researchers? Is that possible?

Mr. Roetter: You will get that pushback, in a sense, along with other pushback, but yes. We can generate random identifiers and hide personally identifiable information. And we do not have to rely on trust alone just because the people using such a system operate in a professional manner. Auditors, in the course of scanning financial documents, look at data that, if leaked, would be extremely valuable to competitors, and the reason we have third-party auditors is that we know how to run secure, controlled environments. So we could do that here. For example, we have health care data throughout the world, and there are legislative frameworks around it that manage it in a way that lets others extract findings without violating personal data.

Mr. Boland: It is absolutely possible. There are hard aspects to it, but it can be done, and there are aspects that are favorable to it. With the TikTok-ization of content, you are dealing less with issues of private data. And we built a system at Meta that could connect what people saw on Facebook with the purchases they made in a physical store, and we were able to do that in a privacy-safe way. If you can do it for ads, you can do it for other areas as well.
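A minimal sketch of the kind of privacy-safe matching Mr. Boland alludes to, assuming both parties share only salted hashes of identifiers and release only aggregate counts above a minimum cohort size; the salt, threshold, and data here are invented, and production systems use stronger tools such as private set intersection or differential privacy:

```python
import hashlib

SALT = b"shared-secret-salt"  # agreed out of band; illustrative only
MIN_COHORT = 1000             # never report a group smaller than this

def pseudonymize(email: str) -> str:
    # The same input on both sides yields the same token, so the two sets can
    # be joined without either side exchanging raw identifiers.
    return hashlib.sha256(SALT + email.strip().lower().encode()).hexdigest()

ad_viewers = {pseudonymize(e) for e in ["a@example.com", "b@example.com"]}
store_buyers = {pseudonymize(e) for e in ["b@example.com", "c@example.com"]}

matched = len(ad_viewers & store_buyers)  # aggregate count only; no row-level data
report = matched if matched >= MIN_COHORT else None  # suppress small cohorts
print(report)  # None here: one match is far below the reporting threshold
```

The same pattern, pseudonymized join plus aggregate-only reporting, is what would let outside researchers study content spread without ever seeing an individual user's record.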
Sen. Peters: Thank you, and I would like to thank the three of you for your testimony today. It has provided insightful contributions to what is an important conversation, and we appreciate your availability to be part of the first panel of this hearing. The hearing will resume this afternoon with our second panel of witnesses: the chief product officers of Meta, YouTube, and Twitter, and the chief operating officer of TikTok. We will reconvene at 2:30 p.m.