>> good morning, everyone. the subcommittee on consumer protection and commerce will now come to order. we'll begin with member statements, and i will begin by recognizing myself for five minutes. good morning and thank you for joining us here today. given what's going on in the world, it's really impressive to see the turnout that is here today, and i welcome everyone. in the two-plus decades since the creation of the internet, we've seen life for americans and their families transformed in many positive ways. the internet provides new opportunities for commerce, education, information, and connecting people. however, along with these many new opportunities, we have seen new challenges as well. bad actors are stalking the online marketplace, using deceptive techniques to influence consumers: deceptive designs to fool them into giving away personal information, stealing their money, and engaging in other unfair practices. the federal trade commission works to protect americans from many unfair and deceptive practices, but a lack of resources, authority, and even a lack of will has left many american consumers feeling helpless in this digital world. adding to that feeling of helplessness, new technologies are increasing the scope and scale of the problem. deep fakes, manipulation of video, dark patterns, bots, and other technologies are hurting us in direct and indirect ways. congress has, unfortunately, taken a laissez-faire approach to regulation of unfair and deceptive practices online over the past decade, and platforms have let them flourish. the result is big tech failed to respond to the grave threats posed by deep fakes, as evidenced by facebook scrambling to announce a new policy that strikes me as wholly inadequate. we'll talk about that later.
since it would have done nothing to prevent the video of speaker pelosi that amassed millions of views and prompted no action by the online platform. hopefully, our discussion today can change my mind about that. underlying all of this is section 230 of the communications decency act, which provides online platforms, like facebook, a legal liability shield for third-party content. many have argued that this liability shield results in online platforms not adequately policing their platforms, including for online piracy and extremist content. thus, here we are with big tech wholly unprepared to tackle the challenges we face today. a top-line concern for this subcommittee must be to protect consumers, regardless of whether they are online or not. for too long, big tech has argued that ecommerce and digital platforms deserve special treatment and a light regulatory touch. we are finding out that consumers can be harmed as easily online as in the physical world, and in some cases, the online dangers are greater. it is incumbent on us, in this subcommittee, to make clear that the protections that apply to in-person commerce also apply in virtual space. i thank the witnesses for their testimony today, and i recognize ranking member rodgers for five minutes. >> thank you. thank you, chair. happy new year, everyone. welcome to our witnesses. i appreciate the chair leading this effort today to highlight online deception. i do want to note that last congress, chairman walden also held several hearings on platform responsibility. disinformation is not a new problem. it was also an issue 130 years ago, in the rivalry between joseph pulitzer's new york world and william randolph hearst's papers. just like click-bait on online platforms today, fake and sensational headlines sold newspapers and boosted advertising revenue.
with far more limited sources of information available in the 1890s, the american people lost trust in the media. to rebuild trust, newspapers had to clean up their act. now, the pulitzer prize is associated with something very different. i believe we are at a similar inflection point today. we're losing faith in sources we can trust online. to rebuild it, this subcommittee, our witness panel, and members of the media are putting the spotlight on abuses and deception. our committee's past leadership and constructive debates have already led to efforts by platforms to take action. just this week, facebook announced a new policy to combat deep fakes, in part by utilizing artificial intelligence. i appreciate ms. bickert being here to discuss this in greater detail. deep fakes and disinformation can be handled with innovation and by empowering people with more information on the platforms they choose and trust. it makes for far more productive outcomes when people can make the best decisions for themselves, rather than relying on the government to make decisions for them. that's why we should be focusing on innovation for major breakthroughs, not more regulations or government mandates. as we discuss ways to combat manipulation online, we must ensure that america will remain the global leader in ai development. there is no better place in the world to raise people's standard of living and make sure that this technology is used responsibly. software is already available to face swap, lip sync, and create facial reenactments to fabricate content. as frightening as it is, we can also be using ai to go after the bad actors and fight fire with fire. we cannot afford to shy away from it, because who would you rather lead the world in machine-learning technology? america? or china? china is sharing its ai surveillance technology with other authoritarian governments, like venezuela.
it's also using this technology to control and suppress ethnic minorities, including the uighurs in chinese concentration camps. "the new york times" reported just last month that china is collecting dna samples and could be using this data to create images of faces. could china be building a tool to further track and crack down on minorities and political dissidents? imagine the propaganda and lies that could develop behind the great chinese firewall, where there is no free speech or an independent press to hold the communist party accountable. that is why america must lead the world in ai development. by upholding our american values, we can use this as a force for good and save people's lives. for example, ai technology and deep-learning algorithms can help us detect cancers earlier and more quickly. clinical trials are already underway making major breakthroughs to diagnose cancers. the continued leadership of our innovators is crucial to make sure that we have the tools to combat online deception. to win the future in a global economy, america should be writing the rules for this technology so that real people, not an authoritarian state like china, are empowered. i'm also glad that we're putting a spotlight on dark patterns. deceptive designs, fake reviews, and bots are the latest version of robo-call scams. i am pleased that the ftc has used its section 5 authority to target this fraud and protect people. we should get their input as we discuss how to handle dark patterns. we also must be careful where we legislate so that we don't harm the practices that people enjoy. a heavy-handed regulation could make it impossible for online retailers to provide discounts. that would especially hurt lower- and middle-income families. in a digital marketplace, services people enjoy should not get swallowed up by a strict definition of a dark pattern. how we make these distinctions is important, so i look forward to today's discussion.
i want to thank the panel and i yield back. >> the gentlelady yields back, and the chair now recognizes mr. pallone, the chairman of the full committee, for five minutes. >> for fundamental aspects of their daily lives, consumers shop online for products ranging from groceries to refrigerators. they use the internet to telecommute or check the weather and traffic before leaving for the office. and they use social media networks to connect with family and friends and as a major source of news and information. when consumers go online, they understandably assume that the reviews of the products they buy are real, that the people on the social networks are human, and that the news and information they're reading is accurate. unfortunately, that is not always the case. online actors, including nation states, companies, and individual fraudsters, are using online tools to manipulate and deceive americans. while some methods of deception are well known, many are new and sophisticated, fooling even the most savvy consumers. today, technology has made it difficult, if not impossible, for typical consumers to distinguish what's real from what is fake. and why exactly are people putting so much effort into the development and misuse of this technology? because they know trust is the key to influencing and taking advantage of people. whether for social, monetary, or political gain, if bad actors can make people believe a lie, then they can manipulate us into taking actions we wouldn't otherwise take. in some instances, we can no longer trust our eyes. videos can be slowed to make someone appear intoxicated. faces can be photoshopped onto someone else's body. audio can be edited in a way that a person's words are taken out of context. and the extent of such manipulation has become extreme. machine-learning algorithms can now create completely fake videos, known as deep fakes, that look real. deep fakes can show real people saying or doing things that they never said or did.
for example, face-swapping technology has been used to place actor nicolas cage into movies where he never was. actor/director jordan peele created a deep fake showing president obama insulting president trump. the most common use of deep fakes is nonconsensual pornography, which has been used to make it appear as if celebrities have been videotaped in compromising positions. and deep fake technology was also used to humiliate a journalist from india who was reporting on an 8-year-old rape victim. bots are automated systems that interact on social media as if they were real people. these bots are used by companies and other entities to build the popularity of brands and respond to customer service requests. even more alarming is the use of these bots by both state and nonstate actors to spread disinformation, which can influence the very fabric of our society and our politics. and manipulation can be very subtle. deceptive designs, sometimes called dark patterns, capitalize on knowledge of how our senses operate to trick us into making choices that benefit the business. have you ever tried to unsubscribe from a mailing list and there's a button to stay subscribed that's bigger and more colorful than the unsubscribe button? that's deceptive design. banner ads have been designed with black spots that look like dirt or hair on the screen to trick you into tapping the ad on your smartphone. and there are so many other examples. and since these techniques are designed to go unnoticed, most consumers have no idea they're happening. in fact, they are difficult even for experts in these types of techniques to detect. and while computer scientists are working on technology that can help detect each of these deceptive techniques, we're in a technological arms race. as detection technology improves, so does the deceptive technology. regulators and platforms trying to combat deception are left playing whack-a-mole.
the unrelenting advances in these technologies raise significant questions for all of us. what is the prevalence of these deceptive techniques? how are these techniques actually affecting our actions and decisions? what steps are companies and regulators taking to mitigate consumer fraud and misinformation? so i look forward to beginning to answer these questions with our expert witness panel today, so we can start to provide more transparency and tools for consumers to fight misinformation and deceptive practices. and, madam chair, i just want to say i think this is a very important hearing. i was just telling my colleague, kathy castor, about a discussion we had at our chairs meeting this morning, where the topic was brought up, and i said, oh, you know, we're having a hearing on this today. so this is something a lot of members, and obviously the public, care about. so thank you for having the hearing today. >> the gentleman yields back. and now, the chair recognizes mr. walden, the ranking member of the full committee, for five minutes for his opening statement. >> madam chair, thanks for having this hearing, and welcome, everyone. i guess this is the second hearing of the new year. there's one that started earlier upstairs, but we welcome you all here for this important topic, and we're glad to hear from our witnesses today, even those who i'm told have health issues this morning. thanks for being here. as with anything, the internet presents bad actors and those seeking to harm others with ample opportunities to manipulate users and take advantage of consumers, who often tend to be some of the most vulnerable in the population. arguably, the digital ecosystem is such that harmful acts are easily exacerbated, and as we all know, false information or fake videos spread at breakneck speeds. that's why, when i was chairman of this committee, we tried to tackle this whole issue of platform responsibility head on.
we appreciate the input we got from many. last congress, as you heard, we held hearings and legislated on online platforms not fulfilling their good samaritan obligations, especially when it comes to online human trafficking. we took a look at companies' use of algorithms and the impact such algorithms have on influencing consumer behavior. we worked on expanding the reach of broadband services so more americans can benefit from the positive aspects of a connected world. the preservation and promotion of cross-border data flows is a topic we need to continue to work on, along with other related issues we face in the connected world, such as cybersecurity and artificial intelligence, to name just a few. we also invited the heads of the tech industry to come and explain their practices right in this hearing room, in two of the committee's highest-profile hearings. facebook ceo mark zuckerberg came and spent about five and a half hours answering some pretty tough questions on the cambridge analytica debacle, as well as providing insight on how facebook collects consumer information and what facebook does with that information. we also welcomed the ceo of twitter to provide insight into how twitter operates, so voices don't feel silenced. i am pleased that chairman pallone brought in the ceo of reddit last year and hope the trend will continue as we seek to understand this ever-evolving and critically important ecosystem from those who sit on top of it. this hearing today helps with that, as this group of experts shines a light on questionable practices, which i hope can yield further fruitful results. public attention often leads to swifter action than any government process can deliver. there is proof some companies are cleaning up their platforms, and we appreciate the work you are doing. for example, following our hearing on cambridge analytica, facebook made significant changes to its privacy policies, reformatted its privacy settings, and created programs to promote local news operations. mr.
zuckerberg was pushed pretty hard on some ads, and facebook removed those ads. we got a call as mr. zuckerberg was headed to the airport that afternoon. also notable, through the global internet forum to counter terrorism, platforms such as twitter, facebook, and youtube have been working together to tackle terrorist content and disrupt its spread. we thank you for that. this is not to suggest the online ecosystem is perfect. it is far from it. these companies could do more to police their platforms, and i think you are all working on that. let me be clear: this hearing should serve as an important reminder to all online platforms that we are watching closely. we want to make sure that we do not harm innovation, but when we see issues or identify clear harms to consumers, and we do not see online entities taking appropriate action, we are prepared to act. thank you for having this hearing. this is tough stuff. we call on platforms to take down things we don't like while staying on the right side of the first amendment, even though much of that content is still protected under the first amendment. if you go too far, we yell at you for taking down things that we like. if you don't take down things we don't like, we yell at you for that. you're in a bit of a box. we know that section 230 is an issue we have to revisit. we all get the opportunity to revise and extend our remarks throughout this process and clean up our bad grammar; maybe some of the bad grammar we have is in the reporting. we will leave that for another day. i yield back. >> the gentleman yields back. all opening statements shall be made part of the record. i would now like to introduce our witnesses for today's hearing. ms. monika bickert, the vice president of global policy management at facebook. i want to acknowledge and thank you. i know that you are not feeling well today and would like to abbreviate some of your testimony.
we thank you very much for coming anyway. i want to introduce dr. joan donovan, research director of the technology and social change project. also, mr. justin hurwitz, the director of the governance and technology center at the university of nebraska college of law and director of law and economics programs at the international center for law and economics. finally, mr. tristan harris, the director of the center for humane technology. we want to thank our witnesses for joining us today. we look forward to your testimony. at this time, the chair will recognize each witness for five minutes to provide their opening statements. i would like to explain the lighting system for those who may not know it. in front of you is a series of lights. the lights will initially be green at the start of your opening statement. the light will turn yellow when you have one minute remaining; please begin to wrap up your testimony at that point. the light will turn red when your time has expired. ms. bickert, you are recognized for five minutes. >> thank you, members of the subcommittee. thank you for the opportunity to appear before you today. my name is monika bickert. i am the vice president for global policy management at facebook, and i am responsible for our content policies. i am a little under the weather today. with apologies, i will keep my remarks short and rely on the written testimony i submitted. we have an important role to play at facebook in addressing manipulation and misinformation on our platform. we have our community standards that specify what we will remove from the site, and our relationship with third-party fact checkers through which fact-checking organizations can rate content as false. we put a label over that content saying this is false information, and we reduce its distribution. under the community standards, there are some types of misinformation that we remove, such as attempts to suppress the vote or interfere with the census.
we announced yesterday a new prong in our policy: we will remove videos that are edited or synthesized using artificial intelligence or deep-learning techniques in ways that are not apparent to the average person and that would mislead the average person to believe that the subject of the video said something he or she did not in fact say. manipulated media that does not fall under this policy is still subject to our fact checking. although deepfakes are an emerging technology, one area where experts have seen them is in nudity and pornography; all of that violates our policies against nudity and pornography, and we would remove it. manipulated videos are also eligible to be fact checked by the third-party fact-checking organizations that we work with, to label and reduce the distribution of misinformation. we are always improving our policies and our enforcement, and we will continue the engagement we have done outside of the company with academics and experts to understand the new ways that these technologies are emerging. we would also welcome the opportunity to collaborate with other industry partners and interested stakeholders, including academics, civil society, and lawmakers, to help develop a consistent industry approach to these issues. our hope is that by working together with all of the stakeholders, we can make faster progress in ways that benefit all of society. thank you. i look forward to your questions. >> thank you. dr. donovan, you are recognized for five minutes. >> thank you, chair and ranking members, for having me today. it is an honor to be invited. i lead a team at the harvard kennedy school that researches online manipulation and deception. i have been a researcher of the internet for the last decade, so i know quite a bit about changes in policy as well as the development of platforms themselves and what they were intended to do. one of the things i want to discuss today is online fraud.
beyond malware, spam, phishing attacks, and credit card scams, there is a growing threat from new forms of identity fraud enabled by technological design. platform companies are unable to manage this alone, and americans need governance. deception is now a multimillion-dollar industry. my research team tracks dangerous individuals and groups who use social media to impersonate brands and other average people. this emerging economy of misinformation is a threat to security, silicon valley companies are profiting from it, and key political and social institutions are struggling to win back public trust. platforms have done more than just give users a voice online. they have effectively given them the equivalent of their own broadcast station, emboldening the most malicious among us. when a media manipulation campaign hits, it is newsrooms, health-care providers, and law enforcement who are tasked with repairing the damage. we currently don't know the true cost of misinformation. individuals and groups can quickly weaponize social media, causing others financial and physical injury. for example, fraudsters using president trump's image, name, logo, and voice have siphoned millions from his supporters by claiming to be part of his reelection coalition. in an election year, donation scams should be of concern to everyone. my team has studied malicious groups, particularly white supremacists and foreign actors, who use social media to inflame racial division. even as these imposters are quickly identified by the communities they target, it takes time for platforms to remove the offending content. this can create a great strain on breaking news cycles, turning many journalists into unpaid content moderators and drawing law enforcement into chasing false leads. online communication technologies need regulatory guardrails to prevent them from being used for manipulative purposes.
i have provided a longer list of ways you could think about technology differently. right now, i would like to call attention to deceptively edited audio and video used to drive clicks, likes, and shares. part of this is the ai technology commonly known as deepfakes, but what i would like to point out is that we have argued that cheap fakes are a wider threat. nancy pelosi and joe biden were featured in these videos, and the platforms refused to take down the pelosi cheap fake. platforms, like radio towers, have provided amplification power. these platforms are highly centralized mechanisms of distribution, and we should place the burden on them. right now, malicious actors jeopardize our ability to make informed decisions about who to vote for and what causes to support. we must expand the public understanding of technology by guarding consumer rights against technological abuse and mounting a cross-sector effort to curb the distribution of harmful and malicious content. platform companies must address the power of amplification and distribution separately from content. platforms and regulation of technology must work in tandem; otherwise, the future is forgery. >> thank you. mr. hurwitz, you are recognized for five minutes. >> thank you. i would be remiss if i did not thank my colleagues. i am a law professor, and i wrote a short law review article for my testimony. >> make sure your microphone is on. pull it up. >> i will not read you the short law review article that i wrote, but i want to make a couple of recommendations. if you want to understand what is at stake with dark patterns, start by reading this book, "reengineering humanity."
their book discusses how modern technology and data analytics, combined with highly programmable environments, create an environment where people themselves are programmable. this book will scare you. after you read that book, you should read this book, "user friendly." it discusses the importance and difficulty of designing technology that seamlessly operates in line with user expectations. this book will help you understand the incredible power of user-friendly design and fill you with hope for what design makes possible, along with an appreciation for how difficult it is to do design well. together, these books will show you both sides of the coin. dark patterns are something this committee should be concerned about. this committee should also approach the topic with great caution. design is powerful, and it is incredibly difficult to do well. efforts to regulate bad uses of design could easily harm efforts to use design for good. how is that for having a professor testify? i have already assigned two books and a longer law review article of my own for you to read. i will try to distill some of the key ideas from that article in the next three minutes or so. dark pattern is an ominous term; it is a dark pattern itself. it is a term for a simple concept: people behave in predictable ways. the concern is that sometimes we can be programmed to act against our own self-interest. i have some examples, if we can look at the first one. this is one from the internet. look at this for a moment. who feels manipulated? it is ok to say if you do. it feels like the image is controlling us. weird stuff. let's look at another example, also from the internet. who feels like this image is manipulative? the previous image was harmless, but this hints at the power of dark patterns. you may have missed the first line or the second line until the text pointed it out to you.
this has gone from weird stuff to scary stuff. on the other hand, these patterns can be used for good. what if this trick was used to highlight an easily missed but important concern for consumers to pay attention to? that could be beneficial to consumers. design is not mere aesthetics; all design influences how decisions are made. it is not possible to regulate bad design without implicating good design. how much of a problem are dark patterns? websites actually are using them, sometimes subtly, sometimes overtly, and these tactics can be effective, driving consumers to do things they normally would not. i would like to leave you with a few ideas about what, if anything, we should do about them. first, dark patterns are used online and off-line. stores use floorplans to influence what people will buy. try canceling a subscription service or returning a product; you will be given a maze of customer service representatives. if these patterns are a problem online, they are a problem off-line as well. we should not focus on one to the exclusion of the other. second, while these tricks are annoying, it is not clear how much they harm consumers or how much benefit they may confer; mandatory disclosure laws have limits, and these tricks can also be used to benefit consumers. third, most of the worst examples of dark patterns very likely fall within the ftc's authority to regulate unfair and deceptive practices. the ftc should attempt to use its existing authority to address them. if this proves ineffective, the ftc should report to you, to congress, on these practices. fourth, the industry has been responsive to these issues and to some extent has been self-regulating. web browsers and operating systems have made very bad practices hard to use. industry standardization, best practices, and self-regulation should be encouraged. fifth, cooperation between regulators and industry should be encouraged; this is an area well-suited to it, and good-faith efforts should be rewarded, perhaps more so given the complexity of these systems.
industry should be at the front line of combating these practices, but there is an important role for regulation to step in. i look forward to the discussion. thank you. >> mr. harris, you are recognized for five minutes. >> thank you, chairwoman, and thank you for inviting me here. i will go off script, because i come here incredibly concerned. i have lifelong experience with deception and how technology influences people's minds. i was a magician as a kid, and i know the culture of the people who built these products and the way it is designed intentionally for mass deception. the thing i most want to respond to is that we often frame this as: we have a few bad apples with deepfakes, and we have to get them off the platform. what i want to argue is that we have dark infrastructure. this is now the infrastructure by which 2.7 billion people, a population bigger than the size of christianity, make sense of the world. it is the information environment. if private companies came along and built nuclear power plants all across the united states, and they started melting down, and the companies said it is your responsibility to have hazmat suits and a radiation kit, that is what we are experiencing now. the responsibility is being put on consumers when, in fact, it should be put on the people building that infrastructure. there are specifically two areas of harm i want to focus on, even though, when this becomes the infrastructure, it controls all of our lives. this is the infrastructure children go to bed with; children spend as much time on these devices as they do in the hours of school. no matter what you put into kids' brains at school, they have all the hours they spend on their phones. let's take the kids issue. the business model of this infrastructure is not aligned with the fabric of society. how much have you paid for your facebook account recently? youtube? zero. they monetize our attention, and the way they get that attention is by using dark patterns, or tricks, to do it.
the way they do it with children is saying: how many likes or followers do you have? they get children addicted to getting attention from other people. they use beautification filters that enhance your self-image. rates of self-harm among teen girls went up 170% after 2010, with the rise of instagram. these are your children, your constituents. this is a real issue. we are hacking the self-image of children. as for the business model, think of it like we are drinking from the flint water supply of information. the business model is polarization. the whole point is: i have to figure out and calculate what keeps your attention. there is a recent upturn study on this. polarization has a home-field advantage in terms of the business model. the natural tendency of these platforms is to reward conspiracy theories and the race to the bottom of the brain stem. it is the reason why all of you have crazier and crazier constituents back home. russia is manipulating our veterans. we left the digital border wide open. this is like facebook building the information infrastructure and not protecting it from any bad actors until the pressure is there. this is leading to a kind of information trust meltdown. no one even has to use deepfakes for people to say, that must be a fake video. we are actually at the last turning point, kind of an event horizon, where we either protect the foundation or let it go away. we say we care about kids, but we allow technology companies to tell them that the world revolves around likes, clicks, and shares. we say we want to come together, but we allow technology to divide us. we allow technology companies to degrade our productivity and mental health and jeopardize the development of our future workforce.
while i am finishing up here, i just want to say that instead of trying to design some new federal agency, some master agency, when technology has basically taken all of the laws of the physical world and virtualized them into a virtual world with no laws, what happens when we have no laws for an entire virtualized infrastructure? you can't just bring in a new agency to regulate all of the virtual world. let's take our existing agencies and give them a digital update that extends their jurisdiction. i know i am out of time. thank you very much. >> thank you. >> at this time, we will move to member questions. each member will have five minutes to ask questions of our witnesses. i will begin by recognizing myself for five minutes. as chair of the subcommittee, over and over again i am confronted with new evidence that big tech has failed at regulating itself. when we had mark zuckerberg here, i did a review of all of the apologies we have had from him over the years. i am concerned that facebook's latest effort to address misinformation on the platform leaves a lot out. i want to begin with some questions for you, ms. bickert. the deepfake policy only covers video that has been manipulated using artificial intelligence or deep learning. is that correct? >> thank you. the policy we announced yesterday is confined to the definition we set forth about artificial intelligence being used to make it appear... >> i only have five minutes. the video of speaker pelosi was edited to make her look like she was drunk. that would not have been taken down under the new policy. is that right? >> it would not fall under that policy. >> as i read the deepfake policy, it only covers video where a person appears to have said words they did not actually say. it does not cover videos where just the image is altered. is that true? >> that is correct.
we do have a broader approach to misinformation that would put a label over it that says false information and directs people to information from fact checkers. >> i don't understand why facebook should treat fake audio differently from fake images. both can be highly misleading and result in significant harm to individuals and undermine democratic institutions. dr. donovan, in your testimony you noted that cheapfakes are more prevalent than deepfakes. should they be treated differently? microphone. >> of course. as if i am not loud enough. what is great about social media is that it makes things smaller. i understand the need for separate policies, but the cheapfakes issue has not been enforced. there is uneven enforcement. you can still find that piece of disinformation, in the wrong context, in multiple places. the policy on deepfakes is narrow. one thing we should understand is that presently, there is no consistent detection mechanism for finding deepfakes. i would like to know how they upload... >> i want to cut you off at this point because i want to ask mr. harris: are platforms doing enough to stop the dissemination of this information? what can government do? should government be seeking to clarify that if this is illegal offline, it is illegal online? >> the platforms are not doing enough. their entire business model is not aligned with solving the problem. their business model is against the issue. we used to protect children from certain content. when youtube gobbles up that part of the economy, what do we do? when facebook gobbles up election advertising, we just removed all of the same protections. we are moving from a lawful society to an unlawful virtual internet society. that is what we have to change. >> thank you. i yield back. the chair recognizes ms. rogers. >> misinformation is not a new problem.
but the speed of information is increasing. the way to address misinformation is more transparency, more sources, more speech, not less. this is important not just in an election cycle but in public health issues, natural disasters or any number of significant events. i am worried about this renewed trend where someone in the government sets the parameters and potentially limits speech and expression. ms. bickert, how do free speech and expression factor into the content decisions at facebook? can you please explain your use of third-party fact checkers? >> thank you. we are very much a platform for free expression. that is why we work with third-party fact checking organizations. we surface more information on the service. we put a label over it: this is false information; here is what fact checkers are saying about this story. we work with more than 50 organizations worldwide. organizations are chosen after meeting high standards for fact checking. >> thank you. as a follow-up: with the total volume of traffic you have, humans alone can't keep up. artificial intelligence and machine learning have a significant role in identifying not only deepfakes but other content that violates your terms of service. can you explain a little more how you use ai, and the potential to use ai to fight fire with fire? >> we do use a combination of technology and people to identify potential misinformation to send to fact checkers. we also use people and technology to try to assess whether something has been manipulated under the policy we released yesterday. with the fact checking program, we use technology to spot things like, say, somebody shared a news story and their friends are commenting on it. that is something our technology can spot and send over to fact checkers. it is not just technology. we have ways for people to flag if they are seeing something they believe to be false. the fact checkers can also proactively choose to rate something they are seeing on facebook.
>> professor hurwitz, you spoke about how these designs can harm users. >> they can modify choice architecture. we have heard examples: text placement, the course of interaction with a website. these can be used to guide users into making uninformed decisions, or to obscure information that users should be paying attention to. this falls into the category of nudges and behavioral psychology. you highlighted some of that in your testimony. >> can you explain how the ftc can be used to address dark pattern practices? >> the ftc has a long history of regulating deceptive practices and advertising practices: false statements, statements that are material to a consumer making a decision and that are harmful to the consumer. it can use that authority and enact rules to take action against a platform or any entity. >> do you think it is doing enough? >> i would love to see the ftc do more in this area, especially when it comes to rulemaking and in-court enforcement actions. these are unknown, uncertain and untested. bringing suits and litigation, that tells us what the agency is capable of. that is something this body needs to know before it tries to craft more legislation or give more authority to an entity. if we already have an agency that has power, let's see what it is capable of. i appreciate you all being here. i appreciate the chair for hosting. >> i thank the ranking member, who yields back. i recognize mr. pallone for five minutes. >> in your various testimonies, you all talked about a variety of technologies and techniques that are being used to deceive and manipulate consumers: dark patterns used to trick people into making certain choices, deepfakes and cheapfakes that show fictional scenarios that look real, algorithms used to keep people's eyes locked on their screens. we know these things are happening. what is less clear is the extent to which these techniques are being used commercially. let me ask dr.
donovan, as a researcher who focuses on the use of these techniques: do you have access to commercial platform data to have a comprehensive understanding of how this disinformation is conducted, and by whom? >> the brief answer is "no." we don't have access to the data as it is. there are all of these limits on the ways you can acquire data through the interface. the other problem is that there was a very good-faith effort between facebook and scholars to get a bunch of data related to the 2016 election that fell apart. a lot of people put an incredible amount of time, money and energy into that effort. it failed around issues related to privacy. what i would also love to see happen is what twitter has started to do: give data related to deletions and account takedowns. we need a record of that so that when we do audit these platforms for financial or social harms, the deletions are also included and marked. even if you can act like a data scavenger and go back and get data, when things are deleted, sometimes they are just gone for good. those pieces of information are often the most crucial. >> should the government be collecting more information to protect americans? >> here is an example. unlike other addictive industries, here the addiction is part of the deception. the tobacco industry does not know which users are addicted to smoking. the alcohol industry does not know who is addicted to alcohol. each tech company does know how many people are checking more than 100 times per day, or using it late at night. imagine being able to audit facebook on a quarterly basis and ask: how many users are addicted between these ages? what are you doing to make adjustments to reduce that? that way, the agencies are asking the questions, and the responsibility and the resources have to be deployed by facebook.
there is a quarterly routine, with each agency asking questions, forcing accountability from the companies in the areas of their existing jurisdictions. what i am trying to figure out is: is that a way we can scale this to meet the scope of the problem, realizing this is happening to 2.7 billion people? >> thank you. this week, facebook released a new policy on how it will handle deepfakes. ms. bickert, under your policy, deepfakes are videos manipulated through artificial intelligence, intended to mislead, and are not parody or satire. did i get that right? >> you did. >> on hate speech or abusive behavior, we see very little consistency across the marketplace, which leaves consumers at a loss. let me go to dr. donovan. is there a way to develop a common standard against problematic practices so consumers are not facing different policies on different websites? >> i think it is possible to create a set of policies, but you have to look at the features that are consistent across these platforms. if they, for instance, use attention to a specific post in their algorithms to boost popularity, then we need a regulation around that, especially because unmanned accounts, for lack of a better term, are often used to accelerate content and move content across platforms. these are things usually purchased off-platform, and they are considered a dark-market product: you can purchase attention to an issue. as a result, there has to be something broader that goes across platforms but also looks at the features and tries to regulate some of these markets that are not built into the platforms themselves. >> thank you, madam chair. >> thank you. mr. bucshon, you are next for five minutes. >> i appreciate the hearing and the opportunity to discuss the spread of misinformation on the internet. i want to stress that i am concerned about efforts to make tech companies adjudicators of truth.
in a country founded on free speech, we should not be allowing private corporations, in my view, or for that matter the government, to determine what qualifies as "truth," potentially censoring a voice because it disagrees with mainstream opinion. that said, i understand the difficulty and the challenges we all face together concerning this issue, and how we are trying to work together to address it. ms. bickert, can you provide some more information on how facebook might or will determine if a video misleads? what factors might you consider? >> thank you. to be clear, there are two ways we might be looking at that issue. one is with regard to the deepfakes policy released yesterday. we will be looking to see specifically whether artificial intelligence and deep learning, that part of the technology, has been used to change or fabricate a video in a way that really would not be evident to the average person. that would be a fundamental part of determining whether it is misleading. >> who is the average... sorry, you're coughing. i am playing devil's advocate: who is the average person? >> these are exactly the questions we have been discussing with more than 50 experts as we have tried to write this policy and get it to the right place. >> i appreciate what you're doing. i am not trying to be difficult. >> it is a challenging issue, which is why we think the approach to disinformation of getting more information out there from accurate sources is effective. >> you stated in your testimony that once a fact checker rates a photo or video as false, facebook reduces its distribution. is there a way for an individual who may have posted these things to protest the decision? >> yes. they can go directly to the fact checker; we make sure there is a mechanism for that. they can either dispute it, or show they have amended whatever it was, an article, that was a problem.
>> people with good lawyers can dispute a lot of things, but for the average citizen in southwest indiana who posts something online, there needs to be, in my view, a fairly straightforward process that the average person, whoever that might be, can understand to protest or dispute the fact that their distribution has been reduced. thank you. mr. hurwitz, you have discussed the ftc's current authority to address this. i am interested to hear your thoughts on how consumers can protect themselves. is the only solution government action, or can consumer education help highlight these advertisement practices? >> the most important thing for any company, especially in the online context, is the trust of the consumers. consumer education, user education, is important, but i think it is fair to say, with apologies perhaps to ms. bickert, that facebook has a trust problem. if consumers and users stop trusting these platforms, the platforms will have a hard time retaining users and consumers. that puts a great deal of pressure on them. in addition, stability of practices matters. one dark pattern is to change the user interface so that users don't know how it operates. if we have platforms that operate in predictable ways, that helps users become educated, helps them understand what the practices are and learn how to operate in this new environment. trust on the internet is different. we are still learning what it means. >> can you talk about how these dark pattern practices took place before the internet and are currently happening in brick-and-mortar stores and other areas, like the mailers that politicians send out? i want to reiterate that this is not just a problem on the internet; it has been around for a while. >> the practices go back to the beginning of time. fundamentally, they are persuasion.
if i want to convince you of my worldview, to convince you to be my customer, my friend, i will do things that influence you, present myself to you in ways that get you to like me or my product. if you come into my store and ask for a recommendation, what size tire do i need for my car? i will give you information. >> my time has expired. my point was that we need, in my view, to take a holistic approach to this problem and, with emerging technology, address it consistently and not just target specific industries. thank you. i yield back. >> i recognize congresswoman castor for five minutes. >> thank you. thank you for calling this hearing. the internet and online platforms have developed over time without a lot of safeguards for the public. in government, we exercise our responsibility to keep the public safe, whether it is the cars we drive, the water we drink, airplanes, or drugs for sale, and really, the same should apply to the internet and online platforms. there is a lot of illegal activity being promoted online, where the first amendment does not come into play. i hope we don't go down that rabbit hole, because we are talking about human trafficking, terrorist plots, illicit sales of firearms. now we have these online platforms that control the algorithms that manipulate the public: the deepfakes, these dark patterns, artificial intelligence, identity theft. these online platforms control the algorithms that steer children and adults, everyone, in certain directions. we need to get a handle on that. for example, mr. harris, one manipulative tactic is the autoplay feature, now ubiquitous across video streaming platforms, particularly for the billions of people that go to youtube or facebook. this feature automatically begins playing a new video after the current video ends, and the next video is determined using an algorithm designed to keep the viewer's attention.
this platform-driven algorithm often drives the proliferation of illegal activities and dangerous conspiracy theories that make it difficult for the average person to find truth-based content. i am particularly concerned about the impact on kids. you have raised that, and i appreciate that. you discussed how the mental health of kids today really is at risk. can you talk more about the contexts in which children may be particularly harmed by these addiction-maximizing algorithms, what parents can do to protect kids from being trapped in the youtube vortex, and what you believe our responsibility is as policymakers? >> thank you so much for your question. it is deeply concerning to me. laying it out: with more than 2 billion users, think of youtube as 2 billion "truman shows." each of you gets a channel. this fractures reality into 2 billion different polarizing channels, each of which is tuned to bring you to a more extreme view. a quick example: imagine a spectrum of all the videos on youtube laid out on one line. on one side you have the calm walter cronkite, rational side of youtube, and on the other side you have alex jones, conspiracy theories, crazy stuff. no matter where you start on youtube, whether in the calm section or the crazy section, if i want you to watch more, which way will i steer you? i am always going to steer you toward crazy town. imagine taking 2.1 billion humans and tilting it like that. three examples of that. two years ago on youtube, if a teen girl watched a dieting video, it would autoplay anorexia videos. if you watched a 9/11 news video, it would recommend 9/11 conspiracy theories. videos about the moon landing? it would recommend flat earth conspiracy theories. flat earth conspiracy videos were recommended hundreds of times. it might sound funny, but it is serious. i have a researcher friend who studied this.
if the flat earth theory is true, it means that not only is the government lying to you, but all of science is lying to you. think about that for a second. it is a meltdown of all of our rational and systemic understanding of the world. as i said, autoplay hacks your brain's stopping cues. as a magician, i know that if i want you to stop, i put in a stopping cue and your brain wakes up. when the water hits the bottom of the glass, i can decide: do i want more? but with this, you never stop. the glass keeps refilling and it never stops. that is how we have millions of kids addicted. in places like the philippines, children watch youtube for 10 hours a day. >> this has significant costs for the public. that is one of the points i want people to understand. dr. donovan says there is an economy of misinformation now. these online platforms are monetizing it and making billions of dollars. meanwhile, public health costs and law enforcement costs are adding up for the public, and we have a real responsibility to tackle this and level the playing field. >> and by not acting, we are subsidizing our societal self-destruction. absolutely. thank you so much. >> i recognize representative burgess for five minutes. >> thank you. thank you for holding this hearing. i apologize; we have another health hearing going on upstairs, one of those days you have to toggle between important issues. mr. hurwitz, let me start by asking you, and this is a little bit off-topic, but it is important. in 2018, a federal grand jury in the western district of pennsylvania indicted seven russians for conducting a physical and cyber hacking operation beginning in 2016 against western targets, including the u.s. anti-doping agency, in response to the revelation of russia's state-sponsored doping campaign. these hackers were members of the russian military intelligence agency, the gru. the stolen information was publicized by the gru
as part of a disinformation campaign designed to undermine the legitimate interests of the victims. the information included personal medical information about u.s. athletes. we know hackers use fictitious identities. we are talking largely in the context of deceiving voters and consumers, but the potential harmful effect is quite large. in your testimony, you describe the practice of using dark patterns as undesirable behavior. can these dark patterns be used to deceive people and hack them in broader state-sponsored operations? >> absolutely, they can. it goes to the broader context in which this is happening, where we are not only talking about consumer protection; we are talking about the fundamental architecture. the nature of trust online is different. all the cues we rely on for you to know who i am when you see me sitting here, that we have gone through a vetting process, that we have identities, telltale cues you can rely on to know who i am and who you are, those are different online. we need to think about trust online differently. one example i will highlight that goes to an industry-based solution, and to the more important question of how we need to think about these things differently, is in the context of political advertising in particular: how do we deal with targeted misinformation in political ads? one approach facebook has been experimenting with is, instead of saying that you can't speak or you can't advertise, if i target an ad at a group of voters, facebook will let someone else target an ad at that same group. it is a different way of dealing with untrustworthy information. we need more creative thinking and research about how we establish trust in the online environment. >> thank you for those observations. ms. bickert, if i ever doubted the power of facebook, a few years ago the doubt was completely eliminated.
one of your representatives actually offered to do a facebook event in a district i represent in northern texas. it was a business-to-business event, on how to facilitate and run your small business more efficiently, and they wanted to do a program. we selected a tuesday morning, and i asked how big a venue we should get, thinking maybe 20 or 30, and i was told 2,000. expect 2,000 people to show up. 2,000 people on a tuesday morning for a business-to-business facebook presentation? are you nuts? the place was standing room only. it was the power of facebook getting information out there. if i ever doubted the power of facebook, it was certainly brought home to me exactly the kind of equity you are able to wield. but recognizing that, do you have a sense of the type of information on your platforms that needs to be fact checked, because you do have such an enormous amount of equity? >> yes, congressman. thank you for those words. we are concerned not just with misinformation, which is why we developed the relationship we have now with more than 50 fact checking organizations. we are also concerned with abuse of any type. i am responsible for managing that, whether it is hate speech, threats of violence, child exploitation content, or content that promotes eating disorders. if any of it violates our policies, we go after it and remove it. >> do you feel you have been successful? >> we have had a lot of successes. we are making huge strides. there is always more to do. we have begun publishing reports every six months where we actually show, across different abuse types, how prevalent it is on facebook based on sampling, how much content we found and removed that quarter, and how much we found before anybody reported it to us. the numbers are trending in a good direction in terms of how effective our enforcement measures are, and we hope that will continue to improve.
>> as policymakers, can we access that data, for example, the number of anti-vaccine messages that have been propagated on your platform? >> i can follow up with you on the reports we have and any other information. >> thank you. i yield back. chairwoman schakowsky: if i could just clarify that question, is that information readily available to consumers or no? >> chairwoman, the reports i mentioned are publicly available, and we can follow up with any detailed requests. chairwoman schakowsky: i recognize the gentleman for five minutes of questioning. >> thank you, madam chair. outside of self-reporting, what can be done to help educate a community or communities that may be specifically targeted on all these different platforms? i was wondering, mr. harris, if you could address that specifically. i think a great deal of my constituency, and even on the republican side, our constituents are probably being targeted on things like race, income, religion and what have you. is there anything outside of self-reporting that can be done to help educate people more? >> yes, there are so many things here. as you mentioned, in the 2016 election, russia targeted african-american populations. i think people don't realize: every time a campaign is discovered, how do we go back and notify people, all of those affected, and say, you were the target of an influence operation? we hear reports every single week of saudi arabia, iran, israel, china and russia all running different operations. one of them was going after veterans. many would say it is a conspiracy theory, but facebook is a company that knows exactly who is affected. they can back-notify everyone after an operation, letting those communities know what happened and that they were targeted. we have to move from conspiracy to: this is real. i have studied how you wake people up from a cult.
you have to show them the techniques that were used to manipulate them. every time these operations happen, we need to teach people. how much we can do depends on how many people facebook chooses to hire for those teams. one example of this: the city of los angeles spends 25% of its budget on security. facebook spends 6%. you can make benchmarks and ask, are they solving the problem? facebook took down 2.2 billion fake accounts. "i think they got all of them" would be the line to use here. >> ms. bickert, given the fact that these foreign actors are targeting people specifically by their race and other demographics, is facebook doing anything to gather information or to look at how specific groups are being targeted? if african americans are being targeted for political misinformation, if whites that live in rural america are being targeted for political misinformation, if these foreign actors can gather information based on things people like, say that you were white and lived in rural america and you liked one america news, and they decide you may be more likely to believe these sorts of conspiracy theories, are you sure some of the things people are sharing on your platform, the likes and dislikes, are not being used as part of that scheme as well? could you answer both of those? >> yes, congressman, thank you for the question. there are, broadly speaking, two things that we do. one is trainings and tools to help people, especially those who might be most at risk, recognize ways to keep themselves safe from everything from hacking to scams and other abuse.
separately, whenever we remove influence operations under what we call our coordinated inauthentic behavior policy, and we have removed more than 50 such networks in the past year, every time we do that, we are public about it because we want to expose what we are seeing. we even include examples in our posts saying: here is the network, it was in this country, it was targeting people in this other country, and here are examples of the types of posts they were putting on their pages. the more we can shed light on this, the more we will be able to stop it. >> if people are being targeted specifically because they are white, or specifically targeted because of a certain television or news program they like, or african-americans are specifically targeted because russian actors may think they lean a certain way in politics, don't you think that information ought to be analyzed more closely instead of relying on the user to figure all this out? especially when people work long hours and may only have time to digest what they immediately read, and may not have an opportunity to go back and analyze something as deeply as what you're saying? >> congressman, i appreciate that. i will say, attribution is complicated, and understanding the intent behind some of these operations is complicated. we think the best way to address that is to make them public. we don't just do this ourselves; we work hand-in-hand with academics and security firms who are studying these types of things so that they can see, and they will sometimes say, as we take down a network, that we have done this in collaboration or conversation with them, and we will name the group. there are groups who can look at this and together hopefully shine light on who the actors are and why they are doing what they are doing. >> thank you. i yield back. chairwoman schakowsky: i recognize the congressman for five minutes. >> thank you, madam chair. thank you for the very important hearing today, and thank you to our witnesses for appearing
before us. it is important for americans to get this information. in 2018, experts estimated that criminals were successful in stealing $37 billion from americans through different scams on the internet: identity theft, friend and family abuse, and imposter schemes. last year in my district, i had the federal trade commission and the i.r.s. out for a senior event so that seniors could be educated on the threat of these scams and how to recognize, avoid, or recover from them. congress recognizes many of these scams are carried out through manipulative and illegal robocalls. we just passed the traced act, which i am very glad the president signed over the holiday. while i am glad we were able to get it done, i am still concerned about the ability of scammers to utilize new technologies and techniques like deepfakes and cheapfakes. ms. bickert, i wanted to pick on you. i appreciate you being here today, especially since you are a little under the weather. i also appreciated reading your testimony last night; i found it very interesting and enlightening. more and more seniors are going online and joining facebook to keep in contact with families and neighbors and friends. in your testimony, you walked us through facebook's efforts to recognize misinformation and what the company is doing to combat malicious actors using manipulated media. is facebook doing anything specifically to protect seniors from being targeted on the platform, or teaching them how to recognize fake accounts or scams? >> thank you for the question. we are indeed. that includes in-person trainings for seniors, which we have done and continue to do. we also have a guide that can be distributed more broadly and that is publicly available, a guide for seniors on the best ways to keep themselves safe.
more broadly, as somebody who was a federal criminal prosecutor for 11 years looking at that kind of behavior, this is something we take seriously across the board. we don't want anybody to be using facebook to scam somebody else. we look proactively for that sort of behavior and we remove it. >> a quick follow-up, because i think it is important. we have learned that seniors often do not report these things because they are afraid: they have been taken, but they do not want to tell their relatives or friends, because they are afraid of losing what they might have, and not just on the money side. so i think it is important that we always think about our seniors. at the workshop we had in the district last year, the f.t.c. stated that one of the best ways to combat scams is to educate individuals on how to recognize illegal behavior, so that they can turn around and educate their friends and neighbors. in addition to facebook's private sector partnerships, would facebook be willing to partner with agencies like the f.t.c. to make sure the public is informed about scammers operating on your platform? >> congressman, i am very happy to follow up on all of that. i think it is important for the public to understand the tools available to keep them safe online. >> we should also consider the ways people are targeted by age: reverse mortgage scams, retirement funding scams, fake health care supplements. when you retire, it becomes very confusing. you are looking for information, and if you are looking primarily on facebook and posting about it, you might be targeted by the advertising system itself. so even if you are not information-seeking, facebook's algorithms and advertising are giving third parties information and then serving advertising to seniors. so it is a persistent problem. >> thank you. again, ms.
bickert, if i could follow up quickly with my remaining 30 seconds: many of the scammers look for ways to get around facebook's policies, including by refining their evasion techniques. is facebook dedicating resources to proactively combat scams, instead of reacting after the fact? >> yes, congressman, we are. i have been overseeing content policy at facebook for about seven years now, and in that time, i would say that we have gone from being primarily reactive in the way we enforce our policies to primarily proactive. we are really going after abusive content and trying to find it. we grade ourselves based on how much we are finding before people report it to us, and we are now publishing reports to that effect. >> my time has expired. i yield back. chairwoman schakowsky: the gentleman yields back, and i recognize mr. o'halleran for five minutes. >> i want to thank the chairwoman for holding this important hearing today. i share the concerns of my colleagues, and i find the types of deceptive online practices that have been discussed today deeply troubling. i have continually stressed that a top priority for congress should be securing u.s. elections; we can see dangerous consequences if the right tools are not in place. misinformation online is a national security concern. as a former law enforcement officer, i understand laws are meaningless if they are not being enforced. i look forward to hearing more from our witnesses about the ftc's capabilities and resources to address these deceptive online practices. dr. donovan, you say in your testimony that regulatory guardrails are needed to protect users from being misled online. i share your concern about deception and manipulation online, including the rise and use of dark patterns, deepfakes, and other kinds of bad practices that can harm consumers. can you explain in more detail what sort of regulatory guardrails are necessary to prevent these instances?
>> i will go into one very briefly. one of the big questions is this: if i put something online that is not an advertisement, and i am just trying to inform my networks, the problem isn't necessarily whether there is a piece of fake content out there. the real problem is the scale, being able to reach millions. in 2010 and 2011, a lot of that happened on these platforms, and that wasn't false information; it wasn't meant to deceive people, and it wasn't meant to siphon money out of other groups. you weren't really able to scale donations, and it was much harder then to create networks of fake accounts and pretend to be an entire constituency. when i talk about regulatory guardrails, we have to think about distribution differently than we think about the content. we can also address the fears that we have about freedom of expression by looking at the mechanisms by which people can break out of their known networks. is it advertising? does the ftc have enough information, and if they don't, do they know why, or how that money moves? that is also part of the problem. we have to think about what transparency is and what it looks like in a meaningful way. >> do you believe that the ftc has adequate authority under section five of the ftc act to take action against individuals and companies engaged in deceptive behavior and practices online? i do want to point out a report that said that of fines amounting to hundreds of millions of dollars, they have only collected about $7,000 since 2015.
>> i think you have to look a lot closer at what the ftc has access to and how they can make that information actionable. for example, proving that there is substantial injury: if only one group has access to the known cause, or knows the enormity, of a scam, then we have to be able to expedite the transfer of data and the investigation, in such a way that we are not relying on journalists or researchers or civil society organizations to investigate. i think the investigatory powers of the ftc have to also include assessing substantial injury. >> thank you. mr. harris, do you believe the agency has enough resources to respond swiftly and appropriately to address these issues? i just want to point out that we flatline their budget all the time, while on the other side the industry continues to expand at an exponential rate. >> that's the issue they are pointing to: the problem-creating aspects of the technology industry. because they operate at exponential scales, they create exponential issues, harms, problems, and scams. so how do you have a small body reach such large capacity? this is what i am thinking about: how can we have additional capacity for each of our different agencies that already have jurisdiction, whether over public health, children, scams, or deception, and have them ask the questions, and then have the technology companies, who have the resources, calculate the answers and set the goals for what they are going to do in the next quarter. >> the chair now recognizes the gentleman for five minutes.
>> thank you, madam chair, and thank you all for being here. this is truly important, extremely important, to all of our citizens. i want to start by saying that when we talk about deepfakes and cheap fakes, that seems somewhat black and white; i can understand it. but when we talk about dark patterns, i think that's more gray. i grew up in the south. we had a grocery store chain that many of you may be familiar with. i had heard a story about the way they got their name; i tried to fact-check it and couldn't confirm it. the way i heard it, the store was arranged so that when you went in, you had to kind of wiggle all the way around before you could get back out, so that you'd buy more things. it was like a pig wiggling its way through the farmyard or something, and they came up with 'piggly wiggly.' that's marketing. then all of a sudden you're at the register, in the checkout line, with all of these things lined up there that they're trying to get you to buy. you could argue that they are impulse buys, but then again, you could also make the argument that when you get home you say, jeez, i wish i had gotten those batteries or band-aids or whatever. how do you differentiate between what is harmful and what is beneficial?
>> great question, because it is gray. as i said previously, dark patterns, the term itself, is a dark pattern intended to make us think that all of this is dark. there are some clear categories: clear lies, clear false statements, where we're talking about classic deception. that's pretty straightforward. when we're talking about more behavioral nudges, it becomes much more difficult. academics have studied nudges for decades at this point, and it is hard to predict when they are going to be effective and when they are not. in the ftc context, the deception standard has a materiality requirement, so there needs to be some demonstration that the practice is material to the consumer and that it does harm. in any additional framework, if we don't have some sort of demonstrable harm requirement and a causal connection -- causation is a basic element of any claim -- if you don't have some ability to tie the act to the harm, you are in dark waters for due process. >> do you think we should be instructing the ftc to conduct research on this, just to see what is going on here? >> more information is good, and to be fair, the ftc is conducting some hearings already. i think that investigation is empowering: both so the ftc understands what they should be doing, so that they can use the information to establish rules, and because the rulemaking process makes it easier to substantiate an enforcement action subsequently. and to respond in part to a previous question, to the extent that we bolster the ftc's powers: even if it lacks enforcement authority, it can issue a report and say, look, we are seeing this practice, it is problematic, we don't have the authority, can you do something about it? that is perhaps valuable. perhaps this body will take direct action, or perhaps the platforms and other entities will say, oh wow, the jig is up; we should change our practices.
>> i've studied this topic for about a decade, so let me take your question of what is different about this. in the supermarket, you have the pig going through, and you have the last-minute purchase items at the checkout. two distinct things are different here. the first is that this is an infrastructure we live by. when you talk about children waking up in the morning, you have autoplay. it's not like the supermarket, where you occasionally go, make your purchases, and at the very end there is that one last moment, one little micro-situation of deception, of marketing. that is okay. in this case, we have children who are spending ten hours a day there. imagine a supermarket you are using ten hours a day, where you wake up in that supermarket. that's a different degree of intimacy and scope in our lives. that's the first thing. the second thing is the degree of asymmetry between the persuader and the persuaded. in the supermarket case, you have somebody who knows a little bit more about marketing and is arranging the shelf space, putting things at the top eye level or at the bottom level. that is a very small amount of asymmetry. in the case of technology, we have a supercomputer pointed at your brain. facebook's news feed is sitting there using the vast resources of 2.7 billion people's behavior to calculate the perfect thing to show you next. and it does not discriminate as to whether that thing is good for you, or whether it is trustworthy or credible. it knows way more about your weaknesses than you know about yourself, and the degree of asymmetry is far beyond the supermarket's. >> and you want the federal government to control that? >> i think we have to ask questions when there is that degree of asymmetry about intimate aspects of your life. it is almost like psychotherapy: it knows everything about your weaknesses, and the business model is to exploit that asymmetry for profit. >> it could be used the other way. it could be used, as in the example earlier: what if autoplay is shifting us towards conspiracy theories? that's a dark pattern; that's bad.
but what if it was used to shift us the other way, to the light, to greater education? if we say autoplay is bad, then we are taking both of those options off the table. it can be used for good. and the question that you asked, about how we differentiate between good uses and bad: that's the question. >> thank you, madam chair. i yield back. >> thank you, madam chair, and thank you so much for holding this very important hearing. unfortunately, i think most americans don't understand how important this is to every single one of us, especially to our children and future generations. there's a headline: 'tiktok, question mark, is it a deepfake maker?' five days ago, techcrunch reported that the parent company of the popular video-sharing app may have secretly built a deepfake maker. although there is no indication that tiktok intends to actually introduce this, the prospect of deepfake technology being made available on such a massive scale, and on a platform that is so popular with kids, raises a number of troubling questions. can you explain why this news may be concerning? >> yes, thank you. deepfakes are a really complex issue. think about how governments are responding. one government that cares about the fabric of truth and trust in its society has said that if you post a deepfake without labeling it clearly as a deepfake, you can actually go to jail. they're not saying that if you post a deepfake you go to jail; they're saying that if you post it without labeling it, you go to jail. you can imagine a world where facebook says: if you post a deepfake without labeling it, your account may be suspended for 24 hours, so that you feel it, and we label your account to other people who see your account. >> wait a second. my colleague on the other side of the aisle just warned, quote, 'and you want to have the government control this?' you just gave an example of how private industry could in fact create deterrence to bad behavior. not the government, but actual industry. >> okay, go ahead.
so that's right, they can create deterrence, and that's the point, instead of playing this ai whack-a-mole. how many engineers at facebook speak the 22 languages of india? they are controlling the information infrastructure not just for this country but for every country. they don't speak the languages of the countries they operate in, and they are automating that, trying to use ai while they are missing everything going by. yes, they make investments that they then celebrate, saying things are better than they were before, but they have created a frankenstein where there is ever more content, more advertising variations, more text lines, et cetera. you can't create problems way beyond the scope of your ability to address them; it would be like building power plants without actually having a plan for security. >> getting back to that example, where they could suspend your account for 24 hours: with all due respect, in that example facebook might lose a little bit of revenue, and the person they are trying to deter from that action is likely going to lose revenue as well. that's correct, and maybe that's an acceptable cost when you look at it intellectually and honestly. but when you look at whether or not private industry is going to take it upon themselves to actually impact their shareholders' revenue, that's where government has a place in this space: to get involved and see that proper actions and reactions are put in place, so that people can understand that you cannot, and you shouldn't, just look at this from a profit motive. because in this world, sometimes negative actions are more profitable for somebody than positive, good actions, and that is one of the unfortunate things. you talked about languages around the world. the number one target, in my opinion, for these bad actions, both for financial gain and for the tearing down of the fabric of a democracy, is us. we are by far and away the largest economy, the biggest target on the planet.
they have tensions against our interests. >> it's a national security issue. i see this as the long-term polarization dynamic accelerating towards civil-war-level things; the hashtag 'civil war is coming' trends, and there's a colleague of mine who says, if you can make it trend, you can make it true. you are planting these suggestions, getting people to even think those thoughts, because you can manipulate the architecture. we are profiting from, and subsidizing, our own self-destruction if government doesn't say that these things can be prosecutable. >> thank you to the witnesses, and thank you, mr. harris. we ran out of time; i wish we had more. thank you. >> the gentleman yields back, and now we recognize mr. soto for five minutes. >> thank you, madam chair. it's been my experience that a lie is able to travel across the internet at the speed of light, while the truth always moves at a snail's pace. i suppose that's because of the algorithms we see. our story starts with deepfakes and cheap fakes. we know from new york times v. sullivan that defamation of a public figure requires actual malice; some of these just appear to be malicious on their face. i appreciate the labeling that facebook is doing; that is something we are honoring in our office as well. why wouldn't facebook simply take down the fake pelosi video? >> thank you for the question. our approach is to give people more information, so that if something is going to be in the public discourse, they will know how to assess it and how to contextualize it. that's why we work with the fact-checkers. i will say that in the past six months, it has been feedback from academics and civil society groups that has led us to come up with stronger warning screens. >> would that video be labeled as false under your current policy now? >> i'm sorry, which video? >> would the fake pelosi video be labeled as false under your new policy?
>> it was labeled false at the time. we think we could have gotten it to fact-checkers faster, and we think the label we put on it could have been more clear. we now have a label for something that has been rated false: it actually obscures the image, it says 'false information,' and it says this has been rated false by fact-checkers. you have to click through, and then you see information from the fact-checking source. >> thank you. in 2016 there was a fake trump rally put together by russians in florida, complete with a hillary clinton impersonator and a fake bill clinton. could a fake rally be created today through facebook, in the united states, by the russians, given your existing technology? >> the network that created that was fake and inauthentic, and we removed it once we found it. i think our enforcement has gotten a lot better, and as a data point for that: in 2016 we removed one such network; this past year we removed more than 50, and that's a global number, all over the world. these are organizations that are using networks of accounts, some fake, some real, in an attempt to obscure who they are or to push false information. >> could it happen again right now? >> our enforcement is not perfect; however, we have made huge strides, and that is a dramatic increase in the numbers. we also work with security firms and academics who are studying this to make sure we are staying on top of it. >> what do you think is the duty of facebook and other social media platforms to prevent the spread of lies across the internet? >> i'm sorry, could you repeat that? >> what do you think facebook's and other social platforms' duty is to prevent the spread of lies across their platforms? >> i can speak for facebook. we think it's important for people to be able to connect safely and with authentic information. my team is responsible for both.
there is our approach to misinformation, where we try to label content that is false and get people accurate information, and then there is everything that we also do to remove abusive content that violates our standards. >> dr. donovan, i saw you reacting to the fake trump rally aspect. could that still happen right now, under existing safeguards in social media? >> yes. the reason why it can still happen is that the scale of the platforms has now turned into a bit of a vulnerability for the rest of society. what is dangerous about events like that, in the kind of research we do, where we are often trying to understand what is happening online, is the interaction between the wires and the weeds: when people start to be mobilized and start to show up places. that, to us, is an order of magnitude more dangerous. >> what do you think we should be doing as government to help? >> there are ways in which, when people are using particularly advanced features, group features, there has to be added transparency about the who, what, when, and where of the events being organized. there have been instances on facebook very recently where they have added transparency to pages, but it is not always clear to the user who is behind a page, and for what reason they are launching a protest. what is dangerous, though, is that actual constituents show up; people show up as fodder for this. and we have to be really careful that they don't stage different parties, like they did in texas, across the street from one another. we don't want manipulation that creates this serious problem for law enforcement as well as others in the area. >> i now recognize the congresswoman for five minutes. >> thank you very much. i really appreciate the witnesses being here, especially on this really important issue. we convened a group of business owners to discuss consensus mechanisms. blockchain technology could have interesting applications in the communication space,
including new ways of identity verification. this technology is unique in that it could enable decentralized verification, rather than relying on one company to serve as the sole gatekeeper. i have a lot of questions, so i would like succinct answers. dr. donovan, do you see value in promoting impartial, decentralized methods of identity verification as a tool to combat the spread of misinformation? >> i think in limited cases, yes, especially around the purchasing of advertising, which is what allows you to break out of your known networks and reach other people through the advertising features. i'm interested in learning more about this consensus on definitions, because i also think it might help us understand what a social media company is and how we define their broadcasts relative to the media, as a media company as well as the other kinds of products they build. i think it would also get us a lot further in understanding what it is we mean when we say deepfakes. >> the european commission recently announced that it is supporting research to advance blockchain technology to support a more accurate online news environment. for the entire panel, yes or no: do you believe the u.s. should be keeping pace with europe in regard to blockchain? >> this is not my area. >> okay. >> dr. donovan? >> more research will help us understand. >> around the world many are .... >> it's not my area, but i know that china is working on a decentralized currency. they could get all the countries in their belt and road plans, and if they switch the global currency to their decentralized currency, that's a major national security threat; it would change the entire world order. i think a lot more work has to be done against china gaining currency domination.
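the decentralized-verification idea raised in this exchange can be made concrete with a small sketch. this is an illustration only, not any platform's or the european commission's actual system; it assumes a hypothetical append-only ledger (here just a python dict) to which a publisher commits the sha-256 hash of a piece of media at publication time, so anyone can later check whether a copy has been altered, without trusting a single gatekeeper.

```python
import hashlib

# hypothetical append-only ledger mapping content id -> committed hash;
# in a real deployment this would be a blockchain or transparency log
ledger = {}

def publish(content_id: str, data: bytes) -> str:
    """commit the hash of original content to the ledger at publication time."""
    digest = hashlib.sha256(data).hexdigest()
    ledger[content_id] = digest
    return digest

def verify(content_id: str, data: bytes) -> bool:
    """check a copy against the committed hash; False means altered or unknown."""
    return ledger.get(content_id) == hashlib.sha256(data).hexdigest()

original = b"frame data of the original video"
publish("video-123", original)

assert verify("video-123", original)             # untouched copy checks out
assert not verify("video-123", original + b"x")  # any edit breaks the hash
```

the design choice here is that the ledger stores only hashes, never the media itself, so verification is cheap and the commitment cannot be quietly rewritten; detecting a deepfake edit of a known original reduces to a hash mismatch.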
>> it is not disputed, and it is confirmed by america's intelligence community, that russia interfered in our 2016 elections through targeted and prolonged online campaigns. we know that russia is ramping up for 2020, and american voters are once again exposed to lies, falsehoods, and misinformation designed to sow division in our democratic process. i was glad to see that the recent funding included $425 million in election security grants, but this is only part of a much larger solution to protect the most fundamental function of our democracy. social media companies need to take clear, forceful action against foreign attempts. mr. harris, what have we learned from the challenges of the 2016 and 2018 election cycles? >> what i'll say is that i think we need a mass public awareness campaign; think of it as a cultural vaccine. back in the 1940s, the committee for national morale and the institute for propaganda analysis actually did an awareness campaign about the threat of fascist propaganda. you have probably seen the video, i believe from 1947, called 'don't be a sucker.' it shows a guy spouting fascist propaganda and somebody starting to nod along, and then someone explains to him that that is fascist propaganda, a deep threat, a national security threat to our country. we could have another mass public awareness campaign, and we could have the help of the technology companies, collectively using their distribution to distribute the inoculation campaign, so that everybody actually knows the threat of the problem. >> does the rest of the panel agree with mr. harris on this? >> on that note, that runs the risk of being called a dark pattern: if the platforms are starting to label certain content in certain ways, there is a cross-current discussion to note there. >> we don't have to come to any solutions now, but i appreciate it. i am running out of time. thank you.
>> congresswoman, i would just point to the ads library that we have put into place over the past few years, which has really brought an unprecedented level of openness to the platform's advertising: people can now see who is behind an ad and who paid for it, and we verify the identity. >> it is difficult for most people out there to really do that unless it is right in front of them. rather than this just happening, we should have much more exposure about this. >> thank you. >> i recognize mr. mcnerney for five minutes. >> i thank the chair, and i thank the witnesses; your testimony has been very helpful and is appreciated. i have to say, with big power comes big responsibility, and in my opinion facebook hasn't really stepped up to that responsibility. back in june, i sent a letter to mr. zuckerberg that was joined by democrats on the committee. in this letter, we noted that we were concerned about the potential conflict of interest between facebook's bottom line and addressing the misinformation on its platform. six months later, i remain very concerned that facebook is putting its bottom line ahead of addressing misinformation. facebook's content monetization policy states that content that depicts or discusses subjects in the following categories may face reduced or restricted monetization, and misinformation is included in the list. it is troubling that your policy doesn't simply ban misinformation. do you think there are cases where misinformation can and should be monetized? please answer yes or no. >> congressman, no. if we see somebody who is intentionally sharing misinformation, and we make this clear in our policies, they will lose the ability to monetize. >> that sounds different than what is in your company's stated policy. the response i received from facebook to my letter failed to answer many of my questions. for example, i asked a question that was left unanswered, and i would like to give you a chance to answer it today.
how many project managers does facebook employ whose full-time job it is to address misinformation? >> congressman, i don't have the number of project managers, but i can tell you that across my team, our engineering teams, and our content review teams, this is a priority. building that network of relationships with more than 50 fact-checking organizations is something that has taken work from teams across the company. >> does that include software engineers? >> it does, because for any of these programs you need to have an infrastructure that can help recognize when something might be misinformation, allow people to report when something might be misinformation, and get things over to the fact-checking organizations. >> i am going to ask you to provide that information: how many full-time staff, including software engineers, are employed in that effort? >> we are happy to try to follow up and answer. >> another question that was left unanswered: on average, from the time content is posted on facebook's platform, how long does it take for facebook to flag suspicious content to third-party fact-checkers, for them to review the content, and for facebook to take remedial action once the content review is completed? >> congressman, the answer depends; this can happen very quickly. we actually allow fact-checking organizations to proactively rate content they see on facebook. >> do you think it is fast enough to keep deepfakes or other misinformation from going viral? >> if they rate something proactively, then it happens instantly. we also use technology and user reporting to flag content to them, and we often see that they will rate it very quickly.
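the review flow described in this exchange -- content is flagged by classifiers or user reports, routed to third-party fact-checkers, and labeled once rated -- can be sketched roughly as follows. all names here are hypothetical; this is not facebook's code, just a minimal illustration of the queue-and-label pattern under those stated assumptions.

```python
from collections import deque

# hypothetical review states for a piece of flagged content
PENDING, RATED_FALSE, RATED_TRUE = "pending", "rated_false", "rated_true"

class FactCheckQueue:
    def __init__(self):
        self.queue = deque()   # items awaiting third-party review
        self.labels = {}       # content id -> current state

    def flag(self, content_id: str) -> None:
        """a classifier match or user report routes content to reviewers."""
        self.queue.append(content_id)
        self.labels[content_id] = PENDING

    def rate(self, content_id: str, is_false: bool) -> None:
        """a fact-checker's rating is applied; false ratings get a warning label."""
        if content_id in self.queue:
            self.queue.remove(content_id)
        self.labels[content_id] = RATED_FALSE if is_false else RATED_TRUE

    def label_for(self, content_id: str) -> str:
        return self.labels.get(content_id, "unreviewed")

q = FactCheckQueue()
q.flag("post-1")                 # suspicious content enters the review queue
q.rate("post-1", is_false=True)  # reviewer rates it; a warning label applies
assert q.label_for("post-1") == "rated_false"
```

the time a post spends in the `PENDING` state is exactly the review latency being asked about here: a proactive rating skips the queue entirely, while queued items stay unlabeled, and therefore shareable, until a reviewer acts.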
>> moving on. i am very concerned that facebook is not prepared to address disinformation in the run-up to this year's election. would you commit to having a third-party audit of facebook's practices for combating the spread of disinformation on its platform conducted by june 1st, with the results of the audit made available to the public? >> congressman, we are very happy to answer any questions about how we do what we do. we think transparency is important, and we are happy to follow up with any suggestions that you have. >> we would like to request a third-party audit. we are not talking about the civil rights audit; we mean an independent third-party audit, to be conducted at facebook by june 1st. >> congressman, again, we are very transparent about what our policies and practices are, and we are happy to follow up with any specific suggestions. >> i was going to say that the third-party fact-checking services are massively understaffed and underfunded, and a lot of people are dropping out of the program. the amount of information flowing through that channel is far beyond their capacity to respond to, and fact-checking isn't really the relevant issue anyway. i think the clearest evidence of this is that facebook's own employees wrote a letter to mark zuckerberg saying, you are undermining our election integrity efforts with your current political ads policy. that says it all to me. that letter was leaked to the new york times about a month ago. i trust those people, because they are close to the problem; they do the research, and they understand how bad the issue is. we are on the outside; we don't actually know. it is almost as if they own the satellites that would show us how much pollution there is, and we cannot actually know from the outside. all we can do is trust people like that on the inside saying this is far less than what we would like to do. they have still not updated their policy. >> thank you. i yield back. >> i recognize the congresswoman for five minutes.
>> thank you, madam chair, and i thank all of you for being here today on a subject that really matters; it does to all of us. in the past, we have treated what little protections people had online as something separate from those we have in our day-to-day lives offline. the line between what happens online and offline is virtually nonexistent; gone are the days when we could separate one from the other. millions of americans have been affected by data breaches and privacy abuses; the numbers are so large that you can't even wrap your head around them. i have talked to members here, and at times they don't even understand what has happened or how people have collected data about us. resources to help folks protect themselves after the fact are desperately needed, but what is really happening is that the cost of the failure to protect sensitive information is being pushed onto millions of people who are being breached through no fault of their own. it is a market externality, and that is where the government should step in. when you go to the pharmacy to fill a prescription, you assume the medicine you are going to get is going to be safe, that it is not going to kill you. when you step outside, you assume the air you breathe is safe. we are trying to make it that way. that is because we have laws that protect people from a long list of known market externalities, and the burden isn't placed on their ability to find out: is the medicine you are taking okay? is the air you are breathing clean? we are still working on that one, but it is one we have identified. it shouldn't be any different for mark zuckerberg's externalities. i sent a letter to facebook today which has a lot of questions that didn't lend themselves to answers here, so i hope that they will be answered. but i would like to get yes-or-no answers from the panel on the following questions. i am going to start this way, with mr.
Harris, because we always start with you. We'll give you a little moment, and thank you for being here even though you are sick. Do you believe that the selling of real-time cell phone location data without users' consent constitutes a market externality? >> I don't know about that specific one, but the entire surveillance capitalism system produces vast harms that are on the balance sheets of society, whether that's the mental health of children, the manipulation of elections, or polarization. >> Is it a market externality? >> Based on the economic definition of externality, it is not; however, it can be problematic. >> I'm in line with that. >> I'm not an economist, but we do think user consent is very important. >> Second question, yes or no: do you believe that having 400 million pieces of personally identifiable information made public, including passport numbers, names, addresses, and payment information, is a market externality? >> Similarly, under the classic economic definition I don't know if it would qualify, but it is deeply alarming. >> Same answer. >> Agreed. >> Are you all agreeing with Mr. Harris? >> Same answer as I gave previously, by the economic definition. >> Do you believe that having 148 million individuals' personally identifiable information, including credit card numbers, driver's licenses, and Social Security numbers, made public is a market externality? >> I can see it sort of like an oil spill. Same answer. >> You don't think it's a problem? >> I didn't say it isn't a problem; I wouldn't categorize it as an externality. >> You don't think we should protect people from that? >> That's not what I'm saying. I rely on a more technical definition of externality. >> Dr. Donovan? >> It's an incredibly important problem. >> Ms. Bickert? >> Yes, I would echo Dr. Donovan. >> Do you believe that having the data of 87 million users taken and used for political purposes is a market externality, Mr. Harris? 
>> I think it's the same answer as before. If I break into your house, steal your stuff, and sell it on the black market, that is not an externality, but it is a problem. >> I wouldn't characterize it as a break-in; it was facilitated by the features built into the platform, and it's a huge problem. >> Again, I think that user control and consent are very important. >> Last question; I'm almost out of time, so you'll have to be fast. Finally, do you believe that simply asking whoever took the data to please delete it is an appropriate response? >> It's very hard to enforce that. Once the data is out there, it's out there; we live in a new world where it is just out there. >> That never should have been allowed in the first place. >> Again, I think it is very important to give people control over their data, and we're doing our best to make sure that we are doing that. >> I'm out of time. Thank you, Madam Chair. >> Thank you. The gentlewoman yields back, and I recognize myself for five minutes. Thank you to the chairwoman, in her absence, and thank you to the panelists. This is a vitally important conversation that we are having, but I've noticed that technology is outpacing policy and the people. We are feeling the impacts on our mental health, we're feeling it in our economy, and we're feeling it in our form of government. So this is a very important conversation. I would like to start with a few questions; they kind of fall under dark patterns and really deal with the idea of deceptive and manipulative practices. It is just a basic question, a yes or no, and it's really about the platforms that we have and the ability of people with disabilities to use them. Are each of you, or any of you, familiar with the term universal design? >> Vaguely, yes. >> Yes, vaguely. >> Vaguely, yes. >> Okay, so there are a lot of "vaguely"s, and I don't have time to really talk about what universal design is, but I think it matters as we look at how people are treated in our society. 
Universal design, and how we look at people with disabilities, is one of the areas I'd like to follow up with you on. Now I'd like to turn my time to a discussion about dark patterns. Every single member of Congress and every one of our constituents, virtually everyone, has been affected by this in some respect. Every day, whether it's giving up our location data, being manipulated into purchasing products that we don't need, or providing sensitive information leading to the scams that many of us are targeted by, the failure to address dark patterns harms individuals. One of the areas of deeper concern to me is the challenge for us as a society as a whole. Cambridge Analytica in and of itself is a great example: it wasn't just an individual that was harmed, it was our society, and we can see some of the remnants today. I heard someone say to me yesterday that they hoped this hearing was not just a hearing but a real wake-up call, a wake-up call to our country. So my first question is: do you believe the oversight of dark patterns and the other deceptive and manipulative practices discussed here is well suited to industry self-regulation? >> No, absolutely not. >> I would like to follow up with Ms. Bickert. Does Facebook have a responsibility to ensure transparency to its users? >> We definitely want that, and yes, I think we are working on new ways to be transparent all the time. >> Does Section 230 of the Communications Decency Act provide immunity to Facebook over these issues? >> Section 230 is an important part of my team being able to do what we do; it gives us the ability to proactively look for abuse and remove it. >> But does it provide immunity? >> Section 230 does provide a certain protection. Most important from my standpoint is the ability for us to go after abuse on our platform; it's also an important mechanism for people who use the internet to be able to post to platforms like Facebook. 
>> One of my concerns in asking that question is that we are having a big conversation about the balance with freedom of speech. I want to turn back to Mr. Harris. How do you think that we in Congress can develop a more agile and responsive approach to the concerning trends on the internet? You mentioned a digital update of federal agencies; can you talk a little bit about that? >> Just as you said, the problem here is that we have, quoting E.O. Wilson, Paleolithic emotions, medieval institutions, and accelerating god-like technology. When your medieval institutions are a light year behind your god-like technology, your system crashes. We need a digital update of some of the existing institutions: Health and Human Services, FCC, FTC. You can imagine going through every category of society where we already have jurisdiction and asking what a digital update would look like, so that there is a direct relationship where every quarter there is an assessment and a set of actions that are going to be taken to remediate these harms. It's the only way I can see scaling this, rather than creating a whole new federal agency to take on these issues. >> I know I'm running out of time. My other question really was going to be about the role that you see for government; I think we're having a lot of conversations here about freedom of speech and also the role of government. So, as a follow-up, I would like to have conversations with you about what you see as the role of government versus self-regulation, and how we can make something happen here. The bigger concern is for us to make sure that we are looking at this both at an individual level and as a society. I yield my time, and recognize the gentlewoman from New York, Ms. Clarke. >> I thank you very much, Madam Chair, and I thank our ranking member and our panelists for their expert testimony here today. Deepfakes currently pose a significant and unprecedented threat. 
Now more than ever we need to prepare for the possibility that foreign adversaries will use deepfakes to spread disinformation and interfere in our elections, which is why we successfully secured language in the NDAA requiring that notification be given to Congress if Russia or China seeks to do exactly this. But deepfakes have been, and will be, used to harm individual Americans. We have already seen instances of women's images being superimposed on fake pornographic videos, and as these tools become more affordable and accessible, we can expect deepfakes to be used to influence financial markets, discredit people, and even incite violence. That's why I introduced the first bill to address this threat, the DEEPFAKES Accountability Act, which requires creators to label deepfakes as altered content, updates our identity theft statute for digital impersonation, and requires cooperation between the government and the private sector to develop protection technologies. I am now working on a second bill, specifically addressing how online platforms deal with deepfake content. Dr. Donovan, we've often talked about deepfakes, where the technological footprint of the content is changed, but can you talk a bit more about the national security implications of cheap fakes, such as the Pelosi video, where footage is simply altered instead of entirely fabricated? >> One of the most effective political uses of a cheap fake is to draw attention and shift the entire media narrative toward a false claim. What we saw last week with the Biden video was particularly concerning, because you have hundreds of news organizations having to dispute something, a video, and platforms have allowed it to scale to a level where the public is curious and looking for that content, and then also coming into contact with other nefarious actors and networks. >> What would you say could be done by government to counteract it? 
>> You're moving very much in the direction I would go, where we need to have some labels, we need to understand the identity threat that it poses, and there needs to be broader cooperation between governments as well. The cost to journalism is very high, because of all the energy and resources that go into tracking, debunking, and getting public information out there. I think the platform companies can do a much better job of preventing that harm by looking at content when it seems to go wildly out of scale with the usual activity of an account; if you do see an uptick of 5,000 views on something, maybe there needs to be proactive content moderation. >> Ms. Bickert, Facebook is a founding member of the Deepfake Detection Challenge, but detection is only partially a technology issue; we also need to have a definition of what a deepfake is, and a policy for which kinds of fake videos are actually acceptable. Last summer you informed Congress that Facebook was working on a precise definition of what constitutes a deepfake. Can you update us on those efforts, especially in light of your announcement yesterday, and specifically, how do you intend to differentiate between legitimate deepfakes, such as those created by people for entertainment, and malicious ones? >> Thank you for the question. The policy that we put out yesterday is designed to address the most sophisticated types of manipulated media, which fit within the definition of what many academics would call deepfakes, so that we can remove them. Beyond that, we do think it's useful, with academia, to have a common definition so we are all talking about the same thing. Those are conversations we've been part of over the past six months and will continue to be a part of, and we are hoping that, working together with industry, we will be able to come up with a common definition. 
>> Should the intent of the deepfake, or the subject matter, be the focus? >> From our standpoint, it is often difficult to tell intent when we are talking about many different types of abuse, and specifically deepfakes or misinformation. That's why, if you look at our policy definition, it is focused not on intent so much as on what the effect will be on the viewer. >> I thank you for allowing my participation today, Madam Chair. >> That concludes the questioning. I have things I want to put into the record, and maybe the ranking member does as well, but I did want to make a concluding comment. We had a discussion that took us to the grocery store, and now we are in a new world that is hugely bigger when, as they say, Facebook is a community of more than two billion people spanning countries, cultures, and languages across the globe. I think that there is now such an incredible and justified distrust of how we are being protected. We know that in the physical world we do have laws that apply, and the expectation of consumers is that those will somehow be there to protect us, but in fact they aren't. We live, then, in the virtual world, in the digital world, in a place of self-regulation, and it seems to me that has not satisfied the expectations of consumers. We don't have institutions right now that, even when they have the authority, have the funding and the expertise, thinking of the Federal Trade Commission just as an example, to do what needs to be done. And we don't have a regulatory framework at all. I think that, hopefully in a bipartisan way, we can think about one, and it may include things such as the kinds of ideas Mr. Harris talked about, which would not necessarily require new regulatory laws, though it may need to. To me, that's the big takeaway today. 
When you have communities that are bigger than any country in the entire world, they are essentially making decisions for all of the rest of us. We know that we've been victimized, but the government of the United States of America needs to respond. That's my takeaway from this hearing. I appreciate hearing from the ranking member. >> I thank the chair, and I thank everyone for being here. I think it is important that we all become more educated. I want to bring to everyone's attention that the FTC is holding a hearing on January 28th; I think it is important that all of us participate, get better educated, and take steps as we move forward. Clearly this is a new era. On one hand, we can celebrate that America leads the world in innovation and technology, which improves our lives in many ways; on the other hand, there is another side we need to be looking at, making sure that we are taking the appropriate steps to keep people safe and secure. We will continue this important discussion and continue to become better educated. Thank you, Chair. >> Thank you very much. I would like to ask unanimous consent to enter the following documents into the record: a letter from SAG-AFTRA, a letter from R Street, a paper written by Jeffrey Westling of the R Street Institute, and a report from the project on Facebook. Without objection, so ordered. >> Let me thank all of our witnesses today. We had good participation from members despite the fact that there were other hearings going on. I remind members that, under committee rules, they have ten business days to submit additional questions for the record, to be answered by the witnesses, and we hope there will be prompt answers. At this time, the subcommittee is adjourned.
