
Thank you, Senator Peters. Senator Johnson. Thank you, Mr. Chairman. Mr. Harris, I agree with you when you say that our best line of defense as individuals is exposure. People need to understand that they are being manipulated, and a lot of this hearing has been about manipulation by algorithms, artificial intelligence. I want to talk about manipulation by human intervention, human bias. We don't allow, or we certainly put restrictions through the FCC on, an individual's ownership of TV stations, radio stations, newspapers, because we don't want that monopoly of content in a community, much less, you know, Facebook, Google, with access to billions of people, hundreds of millions of Americans. So I had staff on Instagram go to the politics account, and by the way, I have a video of this, so I'd like to enter that into the record. They hit follow, and this is the list they were given, in this exact order, and I would ask the audience and witnesses to just see if there is a conservative in here, how many there are. Here is the list: Elizabeth Warren, Kamala Harris, New York Times, Huffington Post, Bernie Sanders, the Economist, Nancy Pelosi, The Daily Show, Washington Post, Covering POTUS, NBC, Wall Street Journal, Pete Buttigieg, Time, the New Yorker, Reuters, Kirsten Gillibrand, ACLU, Hillary Clinton, Real Time with Bill Maher, the UN, the Guardian, HuffPost Women, The Late Show with Stephen Colbert, MoveOn.org, USA Today, the New Yorker, Late Night with Seth Meyers, The Hill, CBS, Justin Trudeau. It goes on. These are five conservative staff members. If there were algorithms shuffling the content toward what they might actually want or would agree with, you would expect to see maybe Fox News, Breitbart, Newsmax. You might even see a really big name like Donald Trump, and there wasn't. So my question is, who is producing that list? Is that Instagram? Is that a politico site? How is that being generated?
I have a hard time feeling that's generated or being manipulated by an algorithm or by AI. I don't know. I would be curious to know what the click pattern was. In other words, you open up an Instagram account and it's blank, and you're saying if you just ask, from an empty account, who do I follow, you're given suggestions for you to follow. I honestly have no idea how Instagram ranks those things, but I would be curious to know what the original clicks were that produced that list. Can anybody else explain that? I mean, I don't believe that's AI trying to give a conservative staff member content they may want to read. This to me looks like Instagram, if they are actually the ones producing that list, trying to push a political bias. Mr. Wolfram, you seem to want to weigh in. You know, the thing that will happen is, if there's no other information, it will tend to be just what, where there is the most content or where the most people on the platform in general have clicked. So it may simply be a statement in that particular case, and I'm really speculating, that the users of that platform tend to like those things. So, again, you would have to assume, then, that the vast majority of users of Instagram are liberal progressives. There might be evidence of that. Ms. Stanphill, is that what your understanding would be? Thank you, Senator. By the way, we could probably do that on Google, too; it would be interesting. I can't speak for Twitter. I can speak for Google's stance just generally with respect to AI, which is we build products for everyone. We have systems in place to ensure no bias is introduced. But we have... I mean, you won't deny the fact that there are plenty of instances of content being pulled off of conservative websites and trying to repair the damage of that, correct? I mean, what's happening here? Thank you, Senator.
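The cold-start behavior Wolfram speculates about above can be sketched in a few lines: with no click history for a brand-new account, a recommender has no personal signal, so a common fallback is to rank candidates by aggregate popularity across all users. This is a minimal illustrative sketch, not Instagram's actual system; the account names and follower counts are hypothetical.

```python
# Toy cold-start ranker: with an empty history, global popularity wins.
# Account names and follower counts below are invented for illustration.

def rank_suggestions(candidates, user_history=None, top_n=5):
    """Rank follow suggestions; with no history, fall back to popularity."""
    if not user_history:  # brand-new account: no personal signal to use
        return sorted(candidates, key=lambda a: a["followers"], reverse=True)[:top_n]
    # A real system would blend popularity with personal affinity here.
    return candidates[:top_n]

accounts = [
    {"name": "outlet_a", "followers": 9_000_000},
    {"name": "outlet_b", "followers": 2_000_000},
    {"name": "outlet_c", "followers": 14_000_000},
]
print([a["name"] for a in rank_suggestions(accounts, top_n=2)])  # → ['outlet_c', 'outlet_a']
```

Under this fallback, every new user sees the same list, skewed toward whatever is most popular platform-wide, which is exactly the ambiguity the exchange above is probing: the same output could come from a popularity prior or from editorial choice.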
I wanted to quickly remind everyone that I am a user experience director and I work on digital well-being, which is a program to ensure that users have a balanced relationship with tech. Mr. Harris, what's happening here? Again, I think conservatives have a legitimate concern that content is being pushed from a liberal progressive standpoint to the vast majority of users of these social sites. I mean, I really wish I could comment, but I don't know much about where that's happening. Ms. Richardson? So there has been some research on this, and it showed that when you're looking at engagement levels there are no partisan disparities; in fact, it's equal. So I agree with Dr. Wolfram in that what you may have seen is just what was trending. Even in the list you mentioned, the Southern Poverty Law Center was simply trending because their executive director was fired, so that may just be a result of the news, not necessarily the organization. But it's also important to understand that research has also shown that when there is any type of disparity on partisan lines, it's usually dealing with the veracity of the underlying content, and that's more of a content moderation issue rather than what you're shown. Okay. Anyway, I'd like to get that video entered into the record, and we will keep looking at this. I think if you google yourself, you will find most of the things that pop up right away will be from news organizations that tend to be to the left. I have had that experience as well, and it seems like, if that actually was based upon a neutral algorithm or some other form of artificial intelligence, that since you are the user and since they know your habits and patterns, you might see something from Fox News or the Wall Street Journal pop up instead of from the New York Times. That to me has always been hard to explain. Let's work together to try to get that explanation, because it's a valid concern. Senator Tester. Thanks. Thank you, Mr. Chairman.
Thank all the folks who have testified here today. Ms. Stanphill, does YouTube have access to personal data on a user's Gmail account? Thank you, Senator. I am an expert in digital well-being at Google, so, I'm sorry, I don't know that with depth and I don't want to get out of my depth. I can take that back for folks to answer. Okay. So when it comes to Google search history, you wouldn't know that, either? I'm sorry, Senator, I'm not an expert in search and I don't want to get out of my depth, but I can take it back. Okay. All right. Let me see if I can ask a question that you can answer. Do you know if YouTube uses personal data in shaping recommendations? Thank you, Senator. I can tell you that I know YouTube has done a lot of work to ensure that they are improving recommendations. I do not know about privacy and data, because that is not necessarily core to digital well-being. I focus on helping provide users with balanced technology usage. So in YouTube that includes a time watched profile, and it includes a reminder where, if you want to set a time limit, you will get a reminder. I got it. Ultimately we give folks power to basically control their usage. I understand what you're saying. I think what I'm concerned about is that, and it doesn't matter if you're talking Google or Facebook or Twitter, whoever it is, they have access to personal information, which I believe they do. Mr. Harris, do you think they do? I wish that I really knew the exact answer to your question. Does anybody know the answer to that question? I mean, the general premise is that the more personal information Google has access to, the better recommendations they can provide; that is usually the talking point. That's correct. And the business model, because they're competing for who can better predict what will keep your attention, my eyes on that website.
Yeah, they would use as much information as they can, and usually the way that they get around this is by giving you an option to opt out, but of course the default is usually to opt in. And that's, I think, what's leading to what you're talking about. Yes. So I am 62 years old, getting older every minute the longer this conversation goes on, but I will tell you that it never ceases to amaze me that my grandkids, the oldest one is about 15 or 16, down to about 8, when we are on the farm, are absolutely glued to this. Absolutely glued to it. To the point where if I want to get any work out of him I have to threaten him. Okay? Because they are riveted. So Ms. Stanphill, when you are in your leadership meetings, do you actually talk about the addictive nature of this? Because it's as addictive as a cigarette or more. Do you talk about the addictive nature? Do you talk about what you can do to stop it? I will tell you that I'm probably going to be dead and gone, and I will probably be thankful for it, when all this comes to fruition, because this scares me to death. Senator Johnson can talk about the conservative websites. You guys could literally sit down at your board meeting, I believe, and determine who is going to be the next president of the United States. I personally believe that you have that capacity. Now, I could be wrong, and I hope I'm wrong. And so, do any of the other folks that are here, I will go with Ms. Richardson, do you see it the same way, or am I overreacting to a situation that I don't know enough about? No, I think your concerns are real, in that the business model that most of these companies are using and most of the optimization systems are built to keep us engaged, keep us engaged with provocative material that can skew in the direction that your concerns lead to.
And I don't know this industry, but do you think the boards of directors for any of these companies actually sit down and talk about the impacts that I'm concerned about, or are they talking about how they continue to use what they've been doing to maximize their profit margin? I don't think they're talking about the risk you're concerned about, and I don't even think that's happening at the product development level, and that's in part because a lot of teams are siloed, so I doubt these conversations are happening in a holistic way to sort of address your concerns. That's good. I don't want to get into a fistfight on this panel. Ms. Stanphill, do the conversations you have, since you couldn't answer the previous questions, indicate that she's right, that the conversations are siloed, is that correct? No, that's not correct, Senator. So why can't you answer my questions? I can answer the question with respect to how we think about digital well-being at Google. It's a cross-company OKR, so it's a goal we work on across the company. I have the novel duty of connecting those dots, but we are doing that, and we have incentive to make sure that we make progress. Okay. I just want to thank you all for being here, and hopefully you all leave friends, because I know that there are certain senators, including myself, that have tried to pit you against one another. That's not intentional. I think that this is really serious. I have exactly the opposite opinion that Senator Johnson has, in that I think there's a lot of driving to the conservative side, so it shows you that when humans get involved in this we're going to screw it up. But by the same token, there need to be circuit breakers like Senator Schatz talked about. Thank you very, very much. Thank you to the old geezer from Montana. Senator Rosen. Thank you, all of you, for being here today. I have so many questions as a former software developer and systems analyst. I see this really as, I have three issues and one question.
So issue one really is going to be: there's a combination happening of machine learning, artificial intelligence, and quantum computing that all comes together and exponentially increases the capacity of predictive analytics. It builds on itself. This is what it's meant to do. Issue two, the monetization and data brokering of these analytics, and the bias in all areas in regard to the monetization of this data. And then, as you spoke about earlier, where does the ultimate liability lie? With the scientists who craft the algorithm, with the computer that processes the data and algorithm, or with the company or persons who monetize the end use of the data, for whatever means? Right? So three big issues. Many more, but on its face. But my question today is on transparency. In so many sectors we require transparency; we are used to it every day. Think about this for potential harm. Every day you go to the grocery store, the market, the convenience store, and in the food industry we have required nutrition labeling on every single item; it clearly discloses the nutrition content. We even have it on menus now, calorie counts. Oh, my, maybe I won't have that alfredo, right, you will go for the salad. We have accepted this, all of our companies have done this; there isn't any food that doesn't have a label. Maybe there is some food, but basically we have it. So, to empower consumers, how do you think we could address some of this transparency that maybe at the end of the day we are all talking about, in regard to these algorithms, the data, what happens to it, how we deal with it? It's overwhelming. I think with respect to things like nutrition labels, we have the advantage that we are using 150-year-old science to say what the chemistry of what is contained in a food is.
Things like computation and AI are a bit of a different kind of science, and they have this feature that this phenomenon of computational irreducibility happens, and it's not possible to just give a quick summary of what the effect of this computation is going to be. But we know, I know, having written algorithms myself, I have kind of an expected outcome. I have a goal in there. You talk about no goals. There is a goal. Yeah. Whether you meet it or not, whether you exceed it or not, whether you fail or not, there is a goal when you write an algorithm: to give somebody who is asking you for this data what they want. The confusing thing is that the practice of software development has changed, and it's changed with machine learning and with AI. That's correct. And so they can create their own goals; machine learning does. It's not quite its own goals; it's rather that when you write an algorithm, you know, I expect, you know, when I started using computers a ridiculously long time ago also, you would write a small program and you would know what every line of code was supposed to do. With quantum computing you don't, but you still should have some ability to control the outcome. Well, my feeling is that rather than saying, I mean, yes, you can put constraints on the outcome. The question is, how do you describe those constraints? And you have to essentially have something like a program to describe those constraints. Let's say you want to say we want to have balanced treatment. Okay. So let's take it out of technology and just talk about transparency in a way we can all understand. Can we put it in English terms? We're going to take your data, well-being, how you use it, do you sleep, don't you sleep, how many hours a day, think about your Fitbit, who is it going to? Can we bring it down to those English-language parameters that people understand? I think some parts of it you could.
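Wolfram's point that a constraint like "balanced treatment" can't just be stated in English, it has to be described as something like a program, can be made concrete with a toy constraint checker over a recommended feed. This is purely an illustrative sketch; the category key and tolerance are invented, and a real policy would be far harder to pin down.

```python
# Toy "balance constraint" expressed as a program: no category of content
# may exceed its even share of the feed by more than a tolerance.
# The "leaning" key and the 0.2 tolerance are hypothetical choices.

from collections import Counter

def satisfies_balance(feed, key="leaning", tolerance=0.2):
    """True if no single category exceeds its even share plus tolerance."""
    counts = Counter(item[key] for item in feed)
    even_share = 1 / len(counts)
    return all(c / len(feed) <= even_share + tolerance for c in counts.values())

feed = [{"leaning": "left"}, {"leaning": "right"},
        {"leaning": "left"}, {"leaning": "center"}]
print(satisfies_balance(feed))  # → True
```

Even this tiny example shows the difficulty he raises: choosing the categories, the key, and the tolerance is itself an editorial decision that someone has to encode.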
I think the part that you cannot is when you say we're going to make this give unbiased treatment of, you know, let's say political directions or something. I'm not even talking unbiased in political directions. There's going to be bias in age, in sex, in race, in ethnicity; there's inherent bias in everything. So, that given, you can still have other conversations. I mean, my feeling is that rather than labeling, rather than saying we will have a nutrition-label-like thing that says what this algorithm is doing, I think the better strategy is to say let's give some third party the ability to be the brand that finally decides what you see. Just like with different newspapers, you can decide to see your news through the Wall Street Journal or through the New York Times or whatever else. Who is ultimately liable if people get hurt by the monetization of this data or the data brokering of some of it? That's a good question. I mean, I think that it will help to break apart the underlying platform. Something like Facebook, you kind of have to use it; there is a network effect. You can't say let's break Facebook into a thousand different Facebooks and you can pick which one you want to use. That's not an option. What you can do is say, when there is a news feed being delivered, is everybody seeing a news feed with the same set of values, with the same brand, or not? I think the realistic thing is to say: have separate providers for that final news feed. That's a possible direction. There are a few other possibilities. So your sort of label says this is the such-and-such labeled news feed, and people get a sense of, is that the one I like, is that the one that's doing something reasonable? If it's not, they can, just as a market matter, reject it. That's my thought. I think I'm way over my time. We could all have a big conversation here. I will submit more questions for the record. Thank you, Senator Rosen. My apologies to the senator from New Mexico, whom I missed.
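Wolfram's "separate providers for the final news feed" proposal can be sketched as a simple interface: the platform stores and serves the content, but the user chooses which third party's ranking function orders their feed. The provider names and item fields below are invented for illustration; this is a sketch of the architecture he describes, not any real platform's API.

```python
# Sketch of third-party final-ranking providers: the platform hosts
# items, the user's chosen provider decides the order they appear in.
# Provider names and item fields are hypothetical.

from typing import Callable, Dict, List

Item = Dict[str, int]
Ranker = Callable[[List[Item]], List[Item]]

RANKING_PROVIDERS: Dict[str, Ranker] = {
    # Each provider is a brand the user can pick, like choosing a newspaper.
    "chronological": lambda items: sorted(items, key=lambda i: i["timestamp"], reverse=True),
    "most_discussed": lambda items: sorted(items, key=lambda i: i["comments"], reverse=True),
}

def build_feed(items: List[Item], provider: str) -> List[Item]:
    """Platform hosts the items; the chosen provider decides the order."""
    return RANKING_PROVIDERS[provider](items)

items = [{"id": 1, "timestamp": 10, "comments": 5},
         {"id": 2, "timestamp": 20, "comments": 1}]
print([i["id"] for i in build_feed(items, "chronological")])  # → [2, 1]
```

The design point is that the ranking function becomes a visible, swappable brand that can compete in a market, rather than an invisible platform default.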
You were up, actually, before the senator from Nevada, but Senator Udall is recognized. Thank you, Mr. Chairman. And thank you to the panel. Very, very important topic here. Mr. Harris, I'm particularly concerned about the radicalizing effect that algorithms can have on young children, and it's been mentioned here today in several questions. I'd like to drill down a little deeper on that. Children can inadvertently stumble on extremist material in a number of ways: by searching for terms they don't know are loaded with subtext, by clicking on shocking content designed to catch the eye, by getting unsolicited recommendations for content designed to engage their attention and maximize their viewing time. It's a story told over and over by parents who don't understand how their children have suddenly become engaged with the alt-right and white nationalist groups or other extremist organizations. Can you provide more detail on how young people are uniquely impacted by these persuasive technologies, and the consequences if we don't address this issue promptly and effectively? Thank you, Senator. Yes, this is one of the issues that most concerns me. As I think Senator Schatz mentioned at the beginning, there's evidence that in the last month, even as recently as that, keeping in mind that these issues have been reported on for years now, there was a pattern identified on YouTube that young girls who had taken videos of themselves dancing in front of cameras were linked in usage patterns to other videos like that that went further and further into that realm, and that was just identified by YouTube's supercomputer as a pattern. It's a pattern of: this is a kind of pathway that tends to be highly engaging. The way that we tend to describe this: if you imagine a spectrum on YouTube, on my left side there is the calm Walter Cronkite section of YouTube; on the right-hand side there's crazy town: UFOs, conspiracy theories, Bigfoot, whatever.
If you take a human being, you could drop them anywhere; you could drop them in the calm section or you could drop them in crazy town. If I'm YouTube and I want you to watch more, which direction from there am I going to send you? I'm never going to send you to the calm section; I'm always going to send you towards crazy town. Now imagine 2 billion people, like an ant colony of humanity, and it's tilting the playing field towards the crazy stuff. The specific examples of this: a year ago, a teen girl who looked at a dieting video on YouTube would be recommended anorexia videos, because that was the more extreme thing to show; to the voodoo doll that looked like a teen girl, the next thing to show is anorexia. If you looked at a NASA moon landing, it would show flat earth conspiracy theories, recommended hundreds of millions of times before being taken down recently. Fifty percent of white nationalists in a study said it was YouTube that had red-pilled them; red pilling is the term for the opening of the mind. The best predictor of whether you will believe in a conspiracy theory is whether I can get you to believe in one conspiracy theory. It makes you doubt and question, and things get paranoid. The problem is that YouTube is doing this en masse, and it's created 2 billion personalized Truman Shows. Each channel has that radicalizing direction. If you think about it from an accountability perspective, back when we had Janet Jackson at the Super Bowl and we had 60 million Americans on the other end, we had a five-second TV delay and a bunch of humans in the loop for a reason. What happens when you have 2 billion Truman Shows, 2 billion possible Janet Jacksons, and 2 billion people on the other end? It's a digital Frankenstein that's hard to control. From there we can talk about how to regulate it. Ms. Stanphill, you have heard him just describe what Google does with young people.
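The drift Harris describes, each recommendation being a step further toward "crazy town", can be illustrated with a deliberately simplified model: if predicted engagement peaks at content slightly more extreme than what the user just watched, a recommender that greedily maximizes engagement walks the user outward step by step. The engagement model below is a caricature invented for illustration, not YouTube's actual algorithm.

```python
# Toy model of engagement-driven drift: content slightly more extreme
# than the current video is assumed to engage most, so a greedy
# recommender keeps stepping outward. Purely a caricature.

def next_video(current_extremeness, catalog):
    """Pick the candidate with the highest predicted engagement."""
    def predicted_engagement(v):
        # Caricature: engagement peaks just beyond the current position.
        return -abs(v - (current_extremeness + 0.1))
    return max(catalog, key=predicted_engagement)

catalog = [round(x * 0.1, 1) for x in range(11)]  # extremeness 0.0 .. 1.0
position, path = 0.2, []
for _ in range(5):            # autoplay five videos in a row
    position = next_video(position, catalog)
    path.append(position)
print(path)  # → [0.3, 0.4, 0.5, 0.6, 0.7]
```

The point of the sketch is structural: nothing in the loop is "trying" to radicalize anyone; the outward walk falls out of greedily optimizing a single engagement signal.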
What responsibility does Google have if its algorithms are recommending harmful videos to a child or a young adult who otherwise would not have viewed them? Thank you, Senator. Unfortunately, the research and information cited by Mr. Harris is not accurate; it doesn't reflect current policies, nor the current algorithm. So what the team has done, in an effort to make sure these advancements are made, is taken such content out of recommendations, for instance. That limits the views by more than 50 percent. So are you saying you don't have any responsibility? Thank you, Senator. Because clearly young people are being directed towards this kind of material. There's no doubt about it. Thank you, Senator. YouTube is doing everything that it can to ensure child safety online, works with a number of organizations to do so, and will continue to do so. Do you agree with that, Mr. Harris? I don't, because I know the researchers, who are unpaid, who stay up until 3:00 in the morning trying to scrape the data sets to show what the actual results are, and it's only through huge amounts of public pressure that they have tackled, bit by bit, pieces of it. If they were truly acting responsibly, they would be doing so preemptively, without the unpaid researchers staying up until 3:00 in the morning doing that work. Thank you, Mr. Chairman. Thank you, Senator Udall. Senator Sullivan. Thank you, Mr. Chairman. I appreciate the witnesses being here today. Very important issues that we are all struggling with. Let me ask Ms. Stanphill: I had the opportunity to engage in a couple of rounds of questions with Mr. Zuckerberg from Facebook when he was here. One of the questions I asked, which I think we are all trying to struggle with, is this issue of what you, and when I say you, Google or Facebook, what you are. There is this notion that you are a tech company, but some of us think you might be the world's biggest publisher. I think about 140 million people get their news from Facebook.
When you combine Google and Facebook, I think it's somewhere north of 80 percent of Americans who get their news from them. So what are you? Are you a publisher? Are you a tech company? And are you responsible for your content? I think that's another really important issue. Mark Zuckerberg did say he was responsible for their content, but at the same time he said that they are a tech company, not a publisher. As you know, whether you are one or the other is really critical, almost the threshold issue, in terms of how and to what degree you would be regulated by federal law. So which one are you? Thank you, Senator. As I might remind everybody, I am a user experience director for Google, and so I support our digital well-being initiative. With that said, I know we are a tech company. That's the extent to which I know this definition that you're speaking of. So do you feel you are responsible for the content that comes from Google on your websites? When people do searches? Thank you, Senator. As I mentioned, this is a bit out of my area of expertise as the digital well-being expert. I would defer to my colleagues to answer that specific question. Well, maybe we can take those questions for the record. Of course. Anyone else have a thought on that pretty important threshold question? Yeah, I think... Mr. Harris, if it's okay if I jump in. Yes, sure. Thank you, Senator. The issue here is that Section 230 of the Communications Decency Act... It's all about Section 230. It's all about Section 230. ...has obviously made it so that the platforms are not responsible for any content that is on them, which freed them up to do what we've created today. The problem is, is YouTube a publisher? They are not generating the content, they are not paying journalists, they are not doing that, but they are recommending things. I think we need a new class in between. The New York Times is responsible if they say something that defames someone else, and that reaches a certain 100 million or so people.
When YouTube recommends flat earth conspiracy theories hundreds of millions of times, and if you consider that 70 percent of YouTube's traffic is driven by recommendations, meaning driven by what they are recommending, what an algorithm is choosing to put in front of the eyeballs of a person, if you were to backwards-derive a motto, it would be: with great power comes no responsibility. Let me follow up on that. Two things real quick, because I want to make sure I don't run out of time here. It's a good line of questioning. You know, when I asked Mr. Zuckerberg, he actually said they were responsible for their content. That was in a hearing like this. Now, that actually starts to get close to being a publisher, from my perspective. So I don't know what Google's answer is, or others', but I think it's an important question. Mr. Harris, you just mentioned something that I actually think is a really important question. I don't know if some of you saw Tim Cook's commencement speech at Stanford a couple of weeks ago. I happened to be there and saw it; I thought it was quite interesting. He was talking about all the great innovations from Silicon Valley, but then he said, quote, lately it seems this industry is becoming better known for a less noble innovation: the belief that you can claim credit without accepting responsibility. Then he talked about a lot of the challenges, and then he said: it feels a bit crazy that anyone should have to say this, but if you've built a chaos factory, you can't dodge responsibility for the chaos. Taking responsibility means having the courage to think things through. So I'm going to open this up, kind of a final question, and maybe we start with you, Mr. Harris. What do you think he was getting at? It was a little bit generalized, but he obviously put a lot of thought into his commencement speech at Stanford. This notion of building things, creating things, and then going, whoa, whoa, I'm not responsible for that.
What's he getting at? And then I will open it up to any other witnesses. I thought it was a good speech, but I'd like your views on it. Yeah, I mean, I think it's exactly what everyone has been saying on this panel: that these things have become digital Frankensteins that are terraforming the world in their image, whether it's the mental health of children or our politics and political discourse, without taking responsibility for taking over the public square. So, again, it comes back to: who do you think is responsible? I think we have to have the platforms be responsible. When they take over election advertising, they are responsible for protecting elections. When they take over the mental health of kids or Saturday morning, they are responsible for protecting Saturday morning. Anyone else have a view on the quotes I gave from Tim Cook's speech? I think one of the questions is: what do you want to have happen? That is, when you say something bad is happening, what is the... you know, it's giving the wrong recommendations. By what definition of wrong? Who is deciding, who is kind of the moral arbiter? If I were running one of these automated content selection companies, and my company does something different, I would not want to be kind of a moral arbiter for the world, which is what is effectively having to happen when there are decisions being made about what content will be delivered and what will not be delivered. My feeling is the right thing to have happen is to break that apart, to have a more market-based approach, to have third parties be the ones who are responsible for that final decision about what content is delivered to which users.
So that the platforms can do what they do well, which is the large-scale engineering, the large-scale monetization of content, but somebody else, somebody that users can choose from, a third party, gets to be the one who decides the final ranking of content shown to particular users, so users can have, you know, brand allegiance to particular content providers that they want and not other ones. Thank you, Mr. Chairman. Thank you, Senator Sullivan. Senator Markey. Thank you, Mr. Chairman, very much. YouTube is far and away the top website for kids today. Research shows that a whopping 80 percent of 6- through 12-year-olds use YouTube on a daily basis. But when kids go on YouTube, far too often they encounter inappropriate and disturbing video clips that no child should ever see. In some instances, when kids click to view cartoons and characters in their favorite games, they find themselves watching material promoting self-harm and even suicide. In other cases, kids have opened videos featuring beloved Disney princesses and all of a sudden see a sexually explicit scene. Videos like this shouldn't be accessible to children at all, let alone systematically served to children. Mr. Harris, can you explain how, once a child consumes one inappropriate YouTube video, the website's algorithms begin to prompt the child to watch more harmful content of that sort? Yes, thank you, Senator. So if you watch a video about a topic, let's say it's that cartoon character, the Hulk or something like that, YouTube picks up some pattern that maybe Hulk videos are interesting to you.
The problem is there's a dark market of people, whom you are referencing in that long article that was very famous, who actually generate content based on the most-viewed videos. They will look at the thumbnails and say there is a Hulk in that video, a Spider-Man in that video, and then they have machines actually manufacture generated content and upload it to YouTube, machines, and tag it in such a way that it gets recommended near those content items, and YouTube is trying to maximize traffic for each of these publishers. So when these machines upload the content, it tries to dose them with some views, saying maybe this video is really good, and it ends up gathering millions and millions of views because kids, quote unquote, like them. As I said in the opening statement, this is about an asymmetry of power being masked as an equal relationship. So the 6-to-12-year-old, they just keep getting fed the next video, the next video, the next video. Correct. And there's no way that that can be a good thing for our country over a long period of time. Especially when you realize the asymmetry: that YouTube is pointing a supercomputer at that child's brain. Clearly the way the websites are designed can pose serious harm to children, and that's why in the coming weeks I will be introducing the Kids Internet Design and Safety Act, the KIDS Act. Specifically, my bill will combat amplification of inappropriate and harmful content on the internet; online design features like autoplay that coerce children and create bad habits; and commercialization and marketing that manipulates kids and pushes them into consumer culture. So, to each of today's witnesses: will you commit to working with me to enact strong rules that tackle the design features and underlying issues that make the internet unsafe for kids? Mr. Harris? Yes. Ms. Stanphill? Yes. It's a terrific goal, but it's not particularly my expertise. Okay. Yes. Okay. Thank you. Ms.
Stanphill, recent reporting suggests that YouTube is considering significant changes to its platform, including ending autoplay for children's videos so that when one video ends another doesn't immediately begin, hooking the child into long viewing sessions. I call for an end to autoplay for kids. Can you confirm to this committee that YouTube is getting rid of that feature? Thank you, Senator. I cannot confirm that as a representative from digital well-being. Thank you. I can get back to you, though. I think it's important, and I think it's very important that that happen, voluntarily or through federal legislation, to make sure that the internet is a healthier place for kids. Senators Blunt and Schatz and myself, Senator Sasse, Senator Collins, Senator Bennet are working on a bipartisan Children and Media Research Advancement Act that will commission a five-year, $95 million research initiative at the National Institutes of Health to investigate the impact of tech on kids. It will produce research to shed light on the cognitive, physical and socio-emotional impacts of technology on kids. I look forward to working on that legislation with everyone at this table as well, so that we can design legislation and ultimately a program. I know that Google has endorsed the CAMRA Act. Ms. Stanphill, can you talk to this issue? Yes, thank you, Senator. I can speak to the fact that we have endorsed the CAMRA Act and look forward to working with you on further regulation. Okay. Same thing for you, Mr. Harris? We've also endorsed it, the Center for Humane Technology, yeah. Thank you. I just think we're late as a nation to this subject, but I don't think that we have an option. We have to make sure that there are enforceable protections for the children of our country. Thank you, Mr. Chairman. Thank you, Senator Markey. Senator Young. I thank our panel for being here.
I thought I would ask a question about concerns that many have, and I expect concerns will grow, about AI becoming a black box where it's unclear exactly how certain platforms make decisions. In recent years deep learning has proved very powerful at solving problems and has been widely deployed for tasks like image captioning, voice recognition and language translation. As the technology advances, there is great hope for AI to diagnose deadly diseases, calculate multimillion-dollar trading decisions and implement successful autonomous innovations for transportation and other sectors. Nonetheless, the intellectual power of AI has received public scrutiny and has become unsettling for some futurists. Eventually society might cross a threshold at which using AI requires a leap of faith. In other words, AI might become, as I say, a black box, where it might be impossible to tell how an AI that has internalized massive amounts of data is making its decisions through its neural network, and by extension it might be impossible to tell how those decisions impact the psyche, the perceptions, the human understanding and perhaps even the behavior of an individual. In early April the European Union released final ethical guidelines calling for what it calls trustworthy AI. The guidelines aren't meant or intended to interfere with policies or regulations, but instead offer a loose framework for stakeholders to implement their recommendations. One of the key guidelines relates to transparency and the ability for AI systems to explain their capabilities, limitations and decision-making. However, if the improvement of AI requires, for example, more complexity, imposing transparency requirements will be equivalent to a prohibition on innovation. So I will open this question to the entire panel, but my hope is that Dr. Wolfram, I'm sorry, sir, you can begin.
Can you tell this committee the best ways for Congress to collaborate with the tech industry to ensure AI system accountability without hindering innovation, and specifically, should Congress implement industry requirements or guidelines for best practices? It's a complicated issue. Yes. I think that it varies from industry to industry. I think in the case of what we're talking about here, internet, automated content selection, I think that the right thing to do is to insert a kind of level of human control into what is being delivered, but not in the sense of taking apart the details of an AI algorithm, but making the structure of the industry be such that there is some human choice injected into what's being delivered to people. I think the bigger story is we need to understand how we're going to make laws that can be specified in computational form and applied to AIs. We're used to writing laws in English, basically, and we're used to being able to, you know, write down some words and then have people discuss whether they're following those words or not. When it comes to computational systems, that won't work. Things are happening too quickly, they're happening too often. You need something where you're specifying computationally that this is what you want to have happen, and then the system can perfectly well be set up to automatically follow those computational rules or computational laws. The challenge is to create those computational rules, and that's something we're just not yet experienced with. It's something we're starting to see, computational contracts, as a practical thing in the world of blockchain and so on, but we don't yet know how you would specify some of the things that we want to specify as rules for how our systems work. We don't yet know how to do that computationally. Are you familiar with the EU's approach to developing ethical guidelines for trustworthy AI? I'm not familiar with those specifics. Okay. Are any of the other panelists? Okay.
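The witness's distinction between laws written in English and laws specified computationally can be illustrated with a toy sketch: a rule precise enough for a machine to apply automatically to every action, with no human in the loop per decision. The rule below (no autoplay recommendations to accounts flagged as children) and all field names are invented examples, not an actual statute or any company's real policy engine.

```python
# Hedged sketch of a "computational law": a predicate a machine can check
# automatically, instead of prose that people argue about after the fact.

def rule_no_autoplay_for_children(action):
    """Return True if the action complies with the (hypothetical) rule."""
    if action["type"] == "autoplay" and action["user_is_child"]:
        return False
    return True

def audit(actions, rules):
    """Apply every computational rule to every action; collect violations."""
    return [a for a in actions for r in rules if not r(a)]

actions = [
    {"type": "autoplay", "user_is_child": True},   # violates the rule
    {"type": "autoplay", "user_is_child": False},  # complies
]
violations = audit(actions, [rule_no_autoplay_for_children])
```

The hard problem the witness names is not running such checks, which is trivial, but writing real legal obligations in a form this unambiguous in the first place.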
Well, then perhaps that's a model we could look at. Or perhaps that would be ill-advised; stakeholders that may be watching these proceedings or listening to them can tell me. Do others have thoughts? So in my written comments I outlined a number of transparency mechanisms that could help address some of your concerns, and one of the recommendations specifically, which was the last one, is we suggested that companies create an algorithmic impact assessment, and that framework, which we originally wrote for government use, can actually be applied in the private sector. We built the framework by learning from different assessments. In the U.S. we use environmental impact assessments, which allow for robust conversation about development projects and their impact on the environment, but also in the EU, which is one of the reference points that we use, they have a data protection impact assessment, and that's something that is done both in government and in the private sector. But the difference here, and why I think it's important for Congress to take action, is what we're suggesting is something that's actually public. So we can have a discourse about whether this is a technological tool that has a net benefit for society or is something that's too risky, that shouldn't be available. I will be attentive to your proposal. Do you mind if we work with you, all of you, if we have any questions about it? Yes, very much. All right. Thank you. Others? Any thoughts? It's okay if you don't. Okay. Sounds like we have a lot of work to do, industry working with other stakeholders, to make sure that we don't act impetuously but we also don't neglect this area of public policy. Thank you. Thank you, Senator Young. Senator Cruz. Ms. Stanphill, a lot of Americans have concerns that big tech media companies and Google in particular are engaged in political censorship. Google enjoys a special immunity from liability under Section 230.
The predicate for that immunity was that Google and other big tech media companies would be neutral public forums. Does Google consider itself a neutral public forum? Thank you, Senator. Yes, it does. Okay. Are you familiar with the report that was released yesterday from Veritas that included a whistleblower from within Google, that included videos from a senior executive at Google, that included documents that are purportedly internal PowerPoint documents from Google? Yes, I heard about that report in industry news. Have you seen the report? No, I have not. So you didn't review the report to prepare for this hearing? It's been a busy day and I have a day job, which is digital well-being at Google, so I'm trying to make sure I keep the trains on the track. I'm sorry this hearing is impinging on your day job. It's a great opportunity. Thank you. One of the things in that report, and I would recommend people interested in political bias at Google watch the entire report and judge for yourself, there is a video from a woman, it's a secret video that was recorded. As I understand it, she is the head of responsible innovation for Google. Are you familiar with Ms. Gennai? I work in user experience, and I believe that AI group is somebody we worked with on the AI principles, but it's a big company and I don't work directly with Jen. Do you know her or no? I do not know Jen. Okay. As I understand it, she is shown in the video saying, and this is a quote, Elizabeth Warren is saying that we should break up Google and, like, I love her, but she's very misguided. Like, that will not make it better. It will make it worse. Because all these smaller companies who don't have the same resources that we do will be charged with preventing the next Trump situation. It's like, a small company cannot do that. Do you think it's Google's job to, quote, prevent the next Trump situation? Thank you, Senator. I don't agree with that. No, sir.
So a different individual, a whistleblower identified simply as an insider at Google with knowledge of the algorithm, is quoted in the same report as saying Google, quote, is bent on never letting somebody like Donald Trump come to power again. Do you think it's Google's job to make sure, quote, somebody like Donald Trump never comes to power again? No, sir, I don't think that is Google's job, and we build for everyone, including every single religious belief, every single demographic, every single region, and certainly every political affiliation. Well, I have to say that certainly does not appear to be the case. Of the senior executives at Google, do you know of a single one who voted for Donald Trump? Thank you, Senator. I'm a user experience director and I work on Google digital well-being. I can tell you we have diverse views, but I can't... It's a simple question. Do you know of anyone who voted for Trump, of the senior executives? I definitely know of people who voted for Trump. Of the senior executives at Google? I don't talk politics with my workmates. Is that a no? Is that a no to what? Do you know of any senior executives, a single executive at the company, who voted for Donald Trump? As the digital well-being expert, I don't think this is in my purview to comment on. So you don't know. That's all right. You don't have to know. I definitely don't know. I can tell you what the public records show. The public records show in 2016 Google employees gave the Hillary Clinton campaign $1.315 million. That's a lot of money. Care to venture how much they gave to the Trump campaign? I would have no idea, sir. Well, the nice thing is it's a round number, zero dollars and zero cents. Not a penny, according to the public reports. Let's talk about one of the PowerPoints that was leaked. The Veritas report has Google internally saying, I propose we make machine learning intentionally human-centered and intervene for fairness. Is this document accurate? Thank you, sir.
I don't know about this document, so I don't know. Okay. I'm going to ask you to respond to the committee in writing afterwards as to whether this PowerPoint and the other documents that are included in the Veritas report, whether those documents are accurate. I recognize that your lawyers may want to write an explanation; you're welcome to write all the explanation that you want, but I also want a simple, clear answer: is this an accurate document that was generated by Google? Do you agree with the sentiment expressed in this document? No, sir, I do not. Let me read you another. Also in this report, it indicates that Google, according to this whistleblower, deliberately makes recommendations, if someone is searching for conservative commentators, deliberately shifts the recommendations so instead of recommending other conservative commentators, it recommends organizations like CNN or MSNBC or left-leaning political outlets. Is that occurring? Thank you, sir. I can't comment on search algorithms or recommendations given my purview as the digital well-being lead. I can take that back to my team. Is it part of digital well-being for recommendations to reflect where the user wants to go rather than deliberately shifting where they want to go? Thank you, sir. As a user experience professional, we focus on delivering on user goals. We try to get out of the way and get them on the task at hand. One of the documents leaked explains what Google is doing, and it has a series of steps, and it ends with, people, parenthesis, like us, are programmed. Does Google view its job as programming people with search results? Thank you, Senator. I can't speak for the whole entire company, but I can tell you that we make sure that we put our users first in our design. These documents raise very serious questions about political bias at the company. Thank you, Senator Cruz. Senator Schatz, anything to wrap up with? A quick statement and then a question.
I don't want the working of the refs to go unresponded to. I won't go into great detail except to say there are members of Congress that use working of the refs to terrify Google and Twitter executives so they don't take action in taking down extreme content, false content, polarizing content, contra their own rules of engagement. I don't want the fact that the Democratic side of the aisle is trying to engage in good faith on this public policy matter, and not work the refs, to allow a message to be sent to the leadership that they have to respond to this bad-faith accusation anytime we have any conversation about what to do in tech policy. My final question for you, and this will be the last time I leap to your defense: did you say privacy and data are not core to digital well-being? Thank you, sir. I might have misstated how that's being phrased, so what I meant... What do you mean to say? I mean to say there's a team that focuses day in, day out on privacy controls as relates to user data. That's outside of my area. You're talking bureaucratically, the way the company is organized. I'm saying, isn't privacy, aren't privacy and data, core to digital well-being? I see. Sorry, I didn't understand that point, Senator. In retrospect, what I believe is that it is inherent in our digital well-being principles that we focus on the user, and that requires that we focus on privacy, security, and control of their data. Thank you. Thank you, Senator. To be fair, I think both sides work the refs. Let me ask a follow-on question. I appreciate Senator Blackburn's line of questioning from earlier, which may highlight some limits on transparency. As we have sort of started, I think, with opening statements today, trying to look at ways that in this new world we can provide a level of transparency. You said it is going to be difficult in terms of explainability of AI, but just understanding a little better how to provide users the information they need to make educated decisions about how they interact with a platform's services.
So the question is, might it make sense to let users effectively flip a switch between a filtered, algorithm-based presentation and an unfiltered presentation? I mean, there are already search services that aggregate user searches and feed them en masse to search engines like Bing, so you're effectively seeing results of a generic search, independent of specific information about you. Works okay. There are things for which it doesn't work well. I think the idea of, you flip a switch, I think that's probably not going to have great results. I think there will, unfortunately, be great motivation, in the case where the switch is flipped to not give user information, to give bad results. I'm not sure how you motivate giving good results in that case. I think it is also, when you think about that whole array of other switches, pretty soon it gets confusing for users to decide which switches they flip for what: do they give location information, not this information, give this information, not that information. My own feeling is the most promising direction is to let some third party be inserted who will develop a brand. There might be 20 third parties, might be like newspapers; people can pick, do they want news from this place, that place, another place. To insert third parties, to have more of a market situation where you're relying on the trust that you have in that third party to determine what you're seeing, rather than saying the user will have precise, detailed control. As much as I would like to see more users be more engaged in computational thinking, understanding what's happening in computational systems, I don't think this is a case where that will work in practice. Anyone else? I think the issue with the flip-the-switch hypothetical is users need to be aware of the trade-offs. Currently, so many are used to the conveniences of existing platforms. So there is currently a privacy-preserving platform, DuckDuckGo.
But if you're used to seeing the most-used results at the top, DuckDuckGo may not be the choice all users would make, because they're not hyper-aware of the trade-offs of giving that information to the provider. So I think, while I understand the reason you're giving that metaphor, it is important for users to understand both the practices of a platform and also the trade-offs: if they want a more privacy-preserving service, what are they losing or gaining from that? Yeah. The issue is also that users, I think as already mentioned, will, quote unquote, prefer the summarized, algorithmic feed that's narrowing things, because it saves them time and energy. If you show people the reverse-chronological feed versus the algorithmic one, the algorithmic one saves time and is more relevant. So even if there's a switch, most people, quote unquote, prefer the algorithmic feed. They have to be aware of trade-offs and have a notion of what fair means there. What I'm most concerned about is the fact that this is fairness with respect to an increasingly fragmented truth that debases the information environment, the shared narrative, that a democracy depends on. I'd like to comment on that issue. I think the challenge is, when you want to sort of have a single shared truth, the question is who gets to decide what that truth is. I think that's the question: is that decided within a single company, implemented using AI algorithms? I think it makes more sense, in the American way of doing things, to imagine it is decided by a whole selection of companies rather than being something that's burnt into a platform that, for example, has sort of become universal through network effects and so on. All right. Thank you all very much. This was a complicated subject, one that I think your testimony and responses helped shed light on. It will certainly shape our thinking in terms of how we proceed, but there's definitely a lot of food for thought there. So thank you very much for your time and input today.
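The "flip a switch" idea the panel debates above, the same posts served either ranked by predicted engagement or in plain reverse-chronological order, can be sketched concretely. The field names and the engagement scores below are hypothetical illustrations, not any platform's real ranking signals.

```python
# Hedged sketch of the filtered-vs-unfiltered switch: one feed function,
# two orderings of the same posts depending on the user's toggle.

def feed(posts, filtered=True):
    if filtered:
        # Algorithmic: most engaging first (the default users "prefer").
        return sorted(posts, key=lambda p: p["engagement"], reverse=True)
    # Unfiltered: newest first, no personalization signal used at all.
    return sorted(posts, key=lambda p: p["timestamp"], reverse=True)

posts = [
    {"id": "a", "timestamp": 1, "engagement": 0.9},
    {"id": "b", "timestamp": 2, "engagement": 0.1},
    {"id": "c", "timestamp": 3, "engagement": 0.5},
]

algorithmic = [p["id"] for p in feed(posts, filtered=True)]     # ["a", "c", "b"]
chronological = [p["id"] for p in feed(posts, filtered=False)]  # ["c", "b", "a"]
```

The witnesses' objection is visible even in this toy: the two orderings differ, the algorithmic one surfaces what is engaging rather than what is recent, and nothing in the mechanism tells the user what trade-off the toggle actually encodes.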
We'll leave the hearing record open a couple of weeks, ask senators if they have questions for the record, and try to get responses back as quickly as possible. They'll be included in the final hearing record. With that, we're adjourned. We take you to a hearing on nuclear waste and where it can be stored. Several witnesses from the nuclear power industry are testifying, along with an attorney for an environmental advocacy group. Lisa Murkowski chairs the committee. This is live coverage on C-SPAN 3. Good morning, everyone. The committee will come to order. We're meeting to examine an issue that effectively we have been at a

