recognition technology. it is clear that despite the private sector's expanded use of the technology, it's just not ready for prime time. during this hearing, we will examine the private sector's development, use, and sale of facial recognition technology, as well as its partnerships with government entities using this technology. we learned from our first hearing on may 22, 2019, that the use of facial recognition technology can severely impact americans' civil rights and liberties, including the right to privacy, free speech, and equal protection under the law. we learned during our second hearing on june 4th how federal, state, and local government entities use this technology on a wide scale, yet provide very little transparency on how and why it's being used or on security measures to protect sensitive data. despite these concerns, we see facial recognition technology being used more and more in our everyday lives. the technology is being used in schools, grocery stores, airports, malls, theme parks, stadiums, and on our phones, social media platforms, doorbell camera footage, and even in hiring decisions, and it's used by law enforcement. this technology is unregulated at the federal level, resulting in questionable and even dangerous applications. in december 2019, the national institute of standards and technology issued a new report finding that commercial facial recognition algorithms misidentified racial minorities, women, children, and elderly individuals at substantially higher rates. i look forward to discussing this study with dr. romine, who is joining us today. i also look forward to hearing from our expert panel from academia, industry, and the advocacy community on recommended actions that policy makers should take to address potential consumer harm based on these findings. our examination of facial recognition technology is a bipartisan effort. i applaud ranking member jordan's tireless and ongoing advocacy on this issue.
we have a responsibility to not only encourage innovation, but to protect the privacy and safety of american consumers. that means educating our fellow members and the american people on the different uses of the technology, and distinguishing between identification and surveillance uses. that also means exploring what protections currently exist in civil rights, consumer privacy, and data security law to prevent misidentifications, as well as providing recommendations for future legislation and regulation. in that vein, i would like to announce today that our committee is committed to introducing and marking up common sense facial recognition legislation in the very near future. and our hope is that we can do that in a truly bipartisan way. we've had several conversations, and i look forward to working together towards that goal. i now recognize the distinguished ranking member jordan for his opening statement. >> thank you, madame chair. we appreciate your willingness to work with us on this legislation, which we'll talk about as well. facial recognition is a powerful new technology that is being widely used by both government agencies and private sector companies. its sales have experienced 20% year to year growth since 2016, and the market is expected to be valued at $8.9 billion by 2022. increasingly, local, state, and federal government entities are utilizing facial recognition under the guise of law enforcement and public welfare, but with little to no oversight. this allows tracking of our movements, patterns, and behavior. all of this is currently happening without legislation to balance legitimate government functions with american civil liberties. that must change. and while this hearing is about commercial uses of facial recognition, i want to be very clear: i have no intention of unnecessarily hampering technological advancement in the private sector. we appreciate the great promise this technology holds for making our lives better.
it's already improved data security and protects consumers. the urgent issue we must tackle is reining in the government's unchecked use of this technology when it impairs our freedoms and our liberties. our late chairman elijah cummings became concerned after learning it was used to surveil protests in his district related to the death of freddie gray. he saw this as encroaching on freedoms of speech and association. this issue transcends politics. the idea of american citizens being tracked and cataloged for merely showing their faces in public is deeply troubling. it is imperative that congress understands the effects of this technology on our constitutional liberties. the invasiveness of facial recognition technology has already led a number of localities to ban their government agencies from buying or using digital facial recognition for any purpose. this trend threatens to create a patchwork of laws that will result in uncertainty and may impede legitimate uses of the technology. this is not an issue we should leave to the courts. facial recognition presents novel questions that are best answered by congressional policy making, which can establish a national consensus. the unique government-wide focus of this committee allows us to address facial recognition technology here at the federal level. we know a number of federal government agencies possess facial recognition technology and use it without guidance from congress, despite serious implications for first and fourth amendment rights. we must understand how and when federal agencies are using this technology and for what purpose. currently, we do not know even this basic information. because our committee has jurisdiction over the entire federal government's use of emerging technology, we must start by pursuing policy solutions to address this fundamental information gap. it is our intention as well to introduce legislation.
we're trying to work with both sides here, trying to work together on legislation that will provide transparency and accountability with respect to the federal government's purchase and use of this technology and this software. i'm pleased to be working with my colleagues across the aisle on the bill that would address these questions, and i want to thank you, madame chairwoman. i look forward to hearing from our witnesses today and thank them for being here. >> thank you, mr. jordan. before we get to the witnesses, i would like to make a unanimous consent request. i would like to insert into the record a report from the aclu which found that amazon's rekognition technology misidentified 28 members of congress as other individuals who had been arrested for crimes, including john lewis, a national civil rights legend. so i would like to place that into the record. and i'd also like to mention that three members of this committee were misidentified: mr. gomez, mr. clay, and mr. desaulnier. they were misidentified along with 11 republican members of congress, which shows this technology is not ready for prime time. so, i would now like to recognize my colleague mr. gomez, who has personal experience with this, for an opening statement. >> thank you, madame chair. first, this is the third hearing the committee is holding on this issue. up until two years ago, this issue was not even on my radar, until the aclu conducted this test, which falsely matched my identity with somebody who committed a crime. then all of a sudden my ears perked up. and i had no doubt that i was misidentified more because of the color of my skin than anything else. so, as i started to learn and do research on this issue, my concerns only grew.
i found out that it's being used in so many different ways, not only in law enforcement at the federal level and the local level, but also when it comes to apartment buildings, when it comes to doorbells, when it comes to shoppers, when it comes to a variety of things. but at the same time, this technology is fundamentally flawed. for somebody who gets pulled over by the police, in certain areas it's not a big deal. in other areas, it could mean life or death if people think you are a violent felon. so, we need to start taking this seriously. this issue probably doesn't rank in the top three issues of any american out in the united states, but as it continues to be used and it continues to have issues, there will be more and more people who are misidentified, and more and more people questioning if their liberties and freedoms are starting to be impacted through no fault of their own, just because some algorithm misidentified them as somebody who committed a crime in the past. so, this is something we need to raise the alarm about, and that's what these hearings are doing in a bipartisan way: making sure the american public doesn't stumble into the dark and all of a sudden our freedoms are a little bit less, our liberties are a little bit less. so, we will start having these important discussions in a bipartisan way to figure out how and what can the federal government do, what can congress do, what is our responsibility. and with that, i appreciate the chair's commitment to legislation. i also appreciate the ranking member's commitment to legislation, because i know that this issue is a tough one and it can only be done in a bipartisan way. with that, i yield back. >> i now recognize mr. meadows of north carolina for an opening statement. >> thank you, madame chair, and to the both of you, thank you for your leadership on this important issue. two things i would highlight. certainly we know mr.
gomez, and we know that there is certainly no criminal background that he could ever be accused of being involved with. so, i want to stress that his character is of the utmost, as it relates to even us on this side of the aisle. and i say that in jest, because one of the things that we do need to focus on -- and this is very important to me -- i think this is where conservatives and progressives come together. it's on defending our civil liberties. it's on defending our fourth amendment rights. and it's that right to privacy. and i agree with the chairwoman and ranking member and mr. gomez and others on the other side of the aisle, where we've had really good conversations about addressing this issue. to focus only on the false positives, i think, is a major problem for us, though, because i can tell you, technology is moving so fast that the false positives will be eliminated within months. and so i'm here to say that if we only focus on the fact that they're not getting it right with facial recognition, we've missed the whole argument, because technology is moving at warp speed. and what we will find is that not only will they properly -- my concern is not that they improperly identified mr. gomez. my concern is that they will properly identify mr. gomez and use it in the wrong manner. so, for the witnesses that are here today, what i would ask all of you is how we can put a safeguard on to make sure that this is not a fishing expedition at the cost of our civil liberties, because that's essentially what we're talking about. we're talking about scanning everybody's facial features, and even if they got it 100% right, how should that be used? how should we ultimately allow our government to be involved in that? and so i'm extremely concerned that as we look at this issue, we have to come together in a bipartisan way to figure this out.
i think it would be headlines in "the new york times" and "the washington post" if you saw members of both parties coming to an agreement on how we are to address this issue. i'm fully committed to doing that. madame chair, i was fully committed to your predecessor. he and i both agreed the very first time this was brought up that we had to do something, and i know the ranking member shares that. and so i'm fully engaged. let's make sure that we get something done, and get something done quickly. because i think if we start focusing, again, on just the accuracy, then they're going to make sure that it's accurate. but what standard should we have for the accuracy? should it be 100%? should it be 95%? i think when mr. gomez was actually identified, the threshold was brought down to 80%. you're going to get a lot of false positives when that happens. we need standards, and we need to make sure our government is not using this in an improper fashion. and with that, i yield back. >> i thank the gentleman for his statement. i would now like to introduce the witnesses. we are pleased to have a rich diversity of expert witnesses on our panel today. brenda leong is senior counsel and director of ai and ethics at the future of privacy forum. dr. charles romine is the director of the information technology laboratory at the national institute of standards and technology. meredith whitaker is the cofounder and codirector of the ai now institute at new york university. daniel castro is vice president of the information technology and innovation foundation and director of its center for data innovation. and jake parker is the senior director of government relations at the security industry association. if you would all rise and raise your right hand, i'll begin swearing you in. do you swear or affirm that the testimony you are about to give is the truth, the whole truth, and nothing but the truth, so help you god? let the record show that the witnesses all answered in the affirmative. thank you, and please be seated.
the microphones are very, very sensitive, so please speak directly into them. and without objection, your written testimony will be made part of our record. and with that, ms. leong, you are now recognized for five minutes. >> thank you for the opportunity to testify and for considering the commercial use of facial recognition technology. this is an important challenge. the future of privacy forum is a non-profit organization that serves as a catalyst for leadership and scholarship in support of emerging technologies. we believe that the power of information is a net benefit to society and that it can be appropriately managed to control the risks to individuals and groups. biometric systems, such as those based on facial recognition technology, have the potential to enhance consumer services and improve security, but must be designed, implemented, and maintained with full awareness of the challenges they present. today my testimony focuses on establishing the importance of technical accuracy in discussing face image based systems, considering the benefits and harms to individuals and groups, and recommending express consent as the default for any commercial use of identification or verification systems. understanding the specifics of how a technology works is critical for effectively regulating the relevant risks. not every camera-based system is a facial recognition system. a facial recognition system creates unique templates. these databases are then used to verify a person in a one to one match, or identify a person in a one to many search. if a match is found, that person is identified with greater or lesser certainty depending on the system in use, the threshold and settings in place, and the operator's expertise. thus, facial recognition systems involve matching two images. without additional processing, they do not impute other characteristics to the image.
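[editor's note: the one-to-one versus one-to-many distinction described in the testimony above can be sketched in a few lines of code. this is a toy illustration only, not any vendor's actual algorithm -- the templates, the cosine similarity measure, and the 0.9 threshold are all assumptions for illustration.]

```python
# Toy sketch of template matching. A face "template" is modeled here as a
# plain list of floats; real systems derive templates from images with
# proprietary feature extractors.

def similarity(a, b):
    """Cosine similarity between two face templates (lists of floats)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sum(x * x for x in a) ** 0.5
    norm_b = sum(x * x for x in b) ** 0.5
    return dot / (norm_a * norm_b)

def verify(probe, enrolled, threshold=0.9):
    """One-to-one match: does the probe template match one enrolled template?"""
    return similarity(probe, enrolled) >= threshold

def identify(probe, gallery, threshold=0.9):
    """One-to-many search: return ids of all gallery templates whose
    similarity to the probe clears the operator-chosen threshold."""
    return [pid for pid, tmpl in gallery.items()
            if similarity(probe, tmpl) >= threshold]
```

note how the operator's threshold drives the outcome: lowering it (as in the 80% threshold mentioned earlier in the hearing) makes the same probe match more gallery entries, trading missed matches for false positives.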
there's been a great deal of confusion on this point, in contrast to facial characterization or emotion detection software, which attempts to analyze a single image and impute characteristics to that image. these systems may or may not link data to particular individuals, but carry their own significant risks. accuracy requirements and capabilities for recognition and characterization systems vary with context. the level of certainty acceptable for verifying an individual's identity when unlocking a mobile device is below the standard that should be required for verifying that an individual is included on a terrorist watch list. in addition, quality varies widely among suppliers based on liveness detection, the diversity of training data sets, and the thoroughness of testing methodologies. the high quality of the systems at the top of the rankings reflects the ability to meet these goals. for example, the most recent nist testing reflects this range of accuracy, but the best systems achieved results across demographic groups with variations that were undetectable. however, the real harms arising from inaccurate recognition and characterization systems cannot be ignored. individuals use facial recognition to open their phones, access bank accounts, or organize photos. organizational benefits include more secure facility access, enhanced hospitality functions, and personalized experiences. new uses are imagined all the time. concerns about real time surveillance societies have led individuals and policy makers to express significant reservations. the decision by some municipalities to ban all facial recognition systems by government agencies reflects these heightened concerns. the ethical considerations for where and how to use these systems are extensive, and the regulatory and legal liability challenges are complex.
when considering the scope of industries hoping to use this technology, from educational and financial institutions to retail establishments, the potential impacts on individuals are mind boggling. as with many technologies, facial recognition applications offer benefits and generate risks based on context. tracking online preferences and personalizing consumer experiences are features some people value but others strongly oppose. tying these options closely to the appropriate consent level is essential. while fpf prefers a comprehensive privacy bill to protect all sensitive data, including biometric data, we recognize congress may choose technology specific bills. if so, our principles provide a useful model, particularly in requiring the default for commercial identification or verification systems to be opt in, that is, express affirmative consent prior to enrollment. exceptions should be few and narrow. thank you for your attention and your commitment to finding a responsible regulatory approach to the use of facial recognition technology. >> thank you. the chair now recognizes dr. romine for five minutes. >> chairwoman, ranking member, and members of the committee, i'm chuck romine, director of the information technology laboratory at the national institute of standards and technology, known as nist. thank you for the opportunity to discuss nist's role in standards and testing for facial recognition technology. in the area of biometrics, nist has been working with the public and private sectors since the 1960s. biometric technologies provide a means to establish or verify the identity of humans based on physical or behavioral characteristics. facial recognition technology compares an individual's facial features to available images for verification or identification purposes. nist work improves the accuracy, quality, usability, interoperability, and consistency of identity management systems and ensures that united states interests are represented in the international arena.
nist research has provided state of the art technology benchmarks and guidance to industry and to u.s. government agencies that depend upon biometric recognition technologies. the nist face recognition vendor testing program, or frvt, provides technical guidance and scientific support for the evaluation of face recognition technologies to various u.s. government and law enforcement agencies, including the fbi, dhs, and cbp. the nist interagency report 8280, released in december 2019, quantified the accuracy of facial recognition algorithms for demographic groups defined by sex, age, race, and country of birth, for both one to one verification and one to many identification searches. it found empirical evidence for demographic differentials in the majority of the face recognition algorithms that nist evaluated. the report distinguishes between false positive and false negative errors and notes that the impacts of errors are application dependent. nist conducted tests to quantify demographic differences for 189 facial recognition algorithms from 99 developers, using 18.27 million images of 8.49 million people. these images came from operational databases provided by the state department, the department of homeland security, and the fbi. i'll first address one to one verification applications. false positive differentials are much larger than those related to false negatives and exist across many of the algorithms tested. false positives might present a security concern to the system owner, as they may allow access to imposters. other findings are that false positives are higher in women than in men, and higher in the elderly and the young compared to middle aged adults. with regard to race, we measured higher false positive rates in asians and african-americans relative to caucasians, and there are higher false positive rates in native americans and pacific islanders. these findings apply to most algorithms, including those developed in europe and the united states.
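[editor's note: the demographic differentials described in the testimony above come from comparing per-group error rates. the sketch below shows the basic idea for one-to-one verification -- a false positive is an impostor comparison (two different people) whose score clears the match threshold. the group labels, scores, and threshold here are hypothetical, and this is not nist's actual frvt methodology.]

```python
# Toy sketch of per-group false positive rate measurement for one-to-one
# face verification. impostor_scores maps a demographic group label to the
# similarity scores of impostor comparisons (pairs of different people).

def false_positive_rates(impostor_scores, threshold):
    """Return {group: fraction of impostor scores at or above threshold}.

    A differential exists when one group's rate is markedly higher than
    another's at the same threshold.
    """
    rates = {}
    for group, scores in impostor_scores.items():
        false_matches = sum(1 for s in scores if s >= threshold)
        rates[group] = false_matches / len(scores)
    return rates
```

the key point the testimony makes falls out of this framing: the threshold is global, but the score distributions differ by group, so a single operating point can produce very different error rates across demographics.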
for some algorithms developed in asian countries, there was no such dramatic difference in false positives in one to one matching between asian and caucasian faces. one possible connection, and an area for research, is the relationship between an algorithm's performance and the data used to train the algorithm itself. i'll now comment on one to many search algorithms. again, the impact of errors is application dependent. false positives in one to many search are particularly important because the consequences could include false accusations. for most algorithms, the nist study measured higher false positive rates in women, african-americans, and particularly in african-american women. however, the study found that some one to many algorithms gave similar false positive rates across these specific demographics. some of the most accurate algorithms fell into this group. this last point underscores one overall message of the report: different algorithms perform differently. indeed, all of our frvt reports note wide variations in recognition accuracy across algorithms, and an important result from the demographic study is that demographic effects are smaller with more accurate algorithms. nist is proud of the positive impact it has had in the last 60 years on the evolution of biometrics capabilities. with broad expertise, both in our laboratories and in successful collaborations with the private sector and other government agencies, nist is actively pursuing the standards and measurement research necessary to deploy interoperable, secure, reliable, and usable identity management systems. thank you for the opportunity to testify on nist activities in face recognition and identity management, and i would be happy to answer any questions that you have. >> ms. whitaker. >> thank you for inviting me to speak today. my name is meredith whitaker and i'm the cofounder of the ai now institute. we're the first university research institute dedicated to studying the social implications of artificial intelligence.
i also worked at google for over a decade. facial recognition poses serious dangers to our rights, liberties, and values, whether it's used by the state or by private actors. the technology does not work as advertised. research shows what technology companies won't tell you: that facial recognition is often inaccurate, biased, and error prone. there's no disclaimer to warn us that the populations facing societal discrimination bear the brunt of facial recognition's failures. as dr. romine mentioned, the most recent nist audit confirmed some systems were 100 times less accurate for black and asian people than for white people. but this isn't the only problem, and ensuring accuracy will not make it safe. facial recognition relies on the mass collection of our data, persistently tracking where we go, what we do, and who we associate with. over half of americans are in a law enforcement facial recognition database, and businesses are increasingly using the technology to surveil and control workers and the public. it's replacing time clocks at job sites, keys for housing units, safety systems for schools, security at stadiums, and much more. and we've seen real life consequences. a facial recognition authentication system used by uber failed to recognize transgender drivers. it is being used to make judgments about people's personalities and their worth. this raises urgent concerns, especially since the claim that you can automatically detect interior character based on facial expression is not supported by scientific consensus, and recalls discredited pseudoscience of the past. most facial recognition systems are developed by private companies who license them to governments and businesses. the commercial nature of these systems prevents meaningful oversight and accountability, hiding them behind legal claims of trade secrecy. this means that researchers, lawmakers, and the public struggle to answer questions about how the technology is being used.
this is especially troubling, since facial recognition is usually deployed by those who already have power -- say, employers, landlords, or the police -- to surveil, control, and in some cases oppress those who don't. in brooklyn, tenants pushed back against their landlord's plans to replace key entry with facial recognition, raising questions about biometric data, racial bias, and the real possibility that it could be used to harass and evict tenants. many have turned to standards for assessment and auditing. these are a wonderful step in the right direction, but they are not enough to ensure that facial recognition is safe. using narrow or weak standards as deployment criteria risks allowing companies to assert that their technology is safe and fair without accounting for how it will be used or for the concerns of the communities who will live with it. if such standards are positioned as the sole check on these systems, they could function to mask harm instead of prevent it. from aviation to health care, it is difficult to think of an industry where we permit companies to treat the public as experimental subjects, deploying untested, unverified, and faulty technology that has been proven to violate civil rights and to amplify bias and discrimination. facial recognition poses an existential threat to democracy and shifts the balance of power between those using it and the populations on whom it's applied. congress is abdicating its responsibility if it continues to allow the technology to go unregulated. as a first step, lawmakers must act to halt the use of facial recognition in sensitive domains by both government and commercial actors. if you care about the overpolicing of communities of color, or gender equity, or the constitutional rights to due process and free association, then the secretive, unchecked deployment of flawed facial recognition systems is an issue you cannot ignore. facial recognition is not ready for prime time. congress has a window to act, and the time is now. >> thank you.
the chair now recognizes daniel castro for five minutes. >> thank you. chairwoman, ranking member, and members of the committee, thank you for the invitation to testify today. there are many positive uses of facial recognition technology emerging. airlines are using it to help travelers get through airports faster. banks are using it to improve security, helping reduce financial fraud. hospitals are using it to verify that the right patient receives the right treatment, preventing medical errors. there's even an app that says it uses facial recognition on dogs and cats to help find lost pets. americans are increasingly familiar with commercial uses of the technology because it's now a standard feature on the latest mobile phones. it's also being integrated into household products like security cameras and door locks. this is why a survey found a majority of americans disagreed with limiting the use of facial recognition, and nearly half opposed strict limits if they would prevent the technology from being used to stop shoplifting. over the past year, i've also seen headlines suggesting that facial recognition technology is inaccurate, inequitable, and invasive. if that was true, i would be worried too. but it isn't. here are the facts. first, there are many different facial recognition systems on the market, and some perform much better than others, including in accuracy rates across race, gender, and age. notably, the most accurate show little to no bias. these systems continue to get measurably better every year, and they can outperform the average human. second, many of the leading companies and industries responsible for developing and deploying facial recognition have voluntarily adopted privacy guidelines. these include voluntary, consensus based standards developed for the community. while the private sector has made significant progress on its own, congress has an important role. i would like to suggest several key steps.
first, congress should pass privacy legislation that pre-empts state laws and establishes basic consumer rights. while it may be appropriate to require opt-in consent for certain uses, such as in health care or education, it won't always be feasible. for example, you will probably not be able to get sex offenders to consent to being included in a registry. in addition, federal law should not include a private right of action, because that would raise costs for businesses, and those costs would be passed on to consumers. second, congress should direct nist to expand its testing of systems to reflect more real world commercial uses, including cloud based systems and infrared systems. nist also should continue to report performance metrics on race, gender, and age, and develop a diverse facial image data set for training and evaluation purposes. third, congress should direct gsa to develop performance standards for any facial recognition systems the government purchases. this would ensure the government doesn't waste tax dollars. fourth, congress should fund deployments of facial recognition systems in government, for example using it to improve security in federal buildings and expedite entry for government workers. fifth, congress should continue to support federal funding for research as part of the government's overall research and development efforts. one of the key areas is computer vision, and the u.s. government should continue to invest in this technology, especially as china makes gains in this field. sixth, congress should consider legislation to establish a warrant requirement for law enforcement to track individuals' movements, including when they use geolocation data. finally, congress should continue providing due oversight of law enforcement. that should include ensuring that any police surveillance does not improperly target political activity, and it should include scrutinizing racial disparities in the use of force among communities of color. congress also should require the department of justice to develop best practices for how state and local authorities use facial recognition.
this should include recommendations on how to disclose when law enforcement will use the technology, what image sources will be used, and what the data retention policies will be. congress should always consider the impact of new technologies and ensure proper guardrails are in place to protect society's best interests. in the case of facial recognition technology, there are many unambiguously beneficial opportunities to use the technology, such as allowing people who are blind or who suffer from face blindness to identify others. so, rather than imposing bans or moratoriums, congress should limit the potential for misuse and abuse. thank you again, and i look forward to answering any questions. >> thank you. jake parker is recognized for five minutes. >> good morning, chairwoman, ranking member, and distinguished members of the committee. my name is jake parker, senior director of government relations at the security industry association. sia is a trade association representing businesses that provide a broad range of security products for government, commercial, and residential users. our members include many of the leading developers of facial recognition technology, and many others that offer products that incorporate this technology for a wide variety of applications. sia members are developing these tools for consumers and enterprise users. it is because of the experience our members have in building and deploying the technology that we're pleased to be here today to talk about how it can be used consistent with our values. we firmly believe that all technology products, including facial recognition, should only be used for lawful, ethical, and non-discriminatory purposes. that way, we as a society can have confidence that facial recognition makes our country safer and brings value to our everyday lives.
so, in commercial settings, facial recognition offers tremendous benefits, allowing individuals to prove their identity in order to enter a venue, board a commercial airplane, perform online transactions, or seamlessly access personalized experiences. in addition, companies are using the technology to improve the physical security of their property and protect their employees against the threat of violence, theft, or other harm. additionally, as you know, government agencies have made effective use of facial recognition for over a decade to improve homeland security, public safety, and criminal investigations. one important example of the use of the technology is to identify and rescue trafficking victims. it's been used in 40,000 cases in north america, identifying 9,000 missing children and over 10,000 traffickers. in one case, law enforcement saw a social media post about a missing child. after law enforcement used facial recognition, the victimized child was located and recovered. in another notable success story, nypd detectives last year used the technology to identify a man who sparked terror by leaving rice cookers at the subway. using facial recognition technology along with human review, detectives were able to identify the suspect within an hour. the chief of detectives was quoted as saying that to not use this technology would be negligent. from security-enhancing tools for unlocking mobile phones to securing critical infrastructure, applications of facial recognition technology abound. in all applications, sia believes transparency is the foundation for governing the use of facial recognition technology. it should be clear when and under what circumstances the technology is used, as well as the processes governing the collection and storage of related data. we support sensible safeguards that promote transparency as the most effective way to ensure responsible use of the technology without unreasonably restricting it. sia does not support moratoriums or blanket bans on use of the technology. 
as the committee works on the proposals mentioned earlier requiring greater accountability, we encourage private sector developers to be brought into the conversation to present real-world views. we hope you'll also remember the important role the government plays in supporting biometric technology improvements. at a minimum, congress should provide nist with the resources it needs to support the expansion of these efforts. we believe any effort specific to commercial use makes sense in the context of a national data privacy policy, one that includes biometric information and is technology-neutral. in the meantime, we encourage our members to play an active role in providing end users with the tools to use the technology responsibly. in order to make this come to fruition, sia is developing risk principles. this hearing comes on the heels of the recent nist study. it's important to note that developers of biometric technologies have been working closely with nist for decades, handing over their technology and allowing the government to test it and post the results. the technology is improving every year, to the point where its accuracy is reaching that of automated fingerprint comparison, which is viewed as the gold standard for identification. the most significant takeaway from this report is that it confirms facial recognition technology performs far better across racial groups than reported before. according to nist data, only 4 out of 116 algorithms tested using the mug shot database had false match rates of more than 1%. we are committed to continuing to improve the technology and to provide transparency and privacy protections surrounding its deployment so that all users can be comfortable with it. on behalf of sia, thanks for the opportunity to appear before you today. >> thank you. dr. 
romine, i would like to ask you about the study that you released last month and that you mentioned in your testimony, and i would like to ask unanimous consent to place that study in the record. without objection. we know that facial recognition technology continues to expand in both the public and private sectors. but your new study found that facial recognition software misidentified persons of color, women, children, and elderly individuals at a much higher rate. in your study, you evaluated 189 algorithms from 99 developers. your analysis found that false positives were more likely to occur with people of color. is that correct? >> it is correct for the largest collection of the algorithms, that's correct. >> and your report also found that women, elderly individuals, and children were more likely to be misidentified by the algorithms. is that correct? >> that is correct for most algorithms. >> now, in women's health, they used to do all the studies on men. when you were doing the studies, were you doing the tests on men's faces as a pattern, or using women's faces? >> we were able to represent a broad cross section of demographics. >> okay. did these disparities in false positives occur broadly across the algorithms that you tested? >> they did occur broadly for most of the algorithms that we tested. >> and your study states, and i quote, across demographics, false positive rates often vary by factors of 10 to beyond 100 times, end quote. these are staggering numbers, wouldn't you say? how much higher was the error rate when the algorithms were used to identify persons of color as compared to white individuals? >> so, as we stated in the report, the error rates for some of the algorithms can be significantly higher, from 10 to 100 times the error rates of identification for caucasian faces, for a subset of the algorithms. >> and what was the difference in the misidentification rate for women? >> similar rates -- >> 10 to 100? 
>> 10 to 100 -- i'll have to get back to you on the exact number. but it's certainly a substantial difference. >> what about black women? is that higher? >> black women have a higher rate of misidentification under the same algorithms that we're discussing than either black faces broadly speaking or women broadly speaking. black women had differentials that were even higher than either of those two other demographics. >> so, what were they? >> substantially higher, on the order of 10 to 100. >> misidentification, as we all know, can have very serious consequences for people when they are falsely identified. it can prevent them from entering a plane or a country. it can lead to someone being falsely accused or detained or even jailed. i am deeply concerned that facial recognition technology has demonstrated racial, gender, and age bias. we should not rush to deploy it unless we understand the potential risks and mitigate them. your study provides us with valuable insight into the current limitations of this technology, and i appreciate the work that you and all of your colleagues on the panel today have done to increase our understanding. i would now recognize the ranking member. no, i am going to recognize the gentlelady from north carolina, mrs. foxx, now recognized for questions. >> mr. parker, how competitive is the facial recognition market? >> it is extremely competitive. because of the advances in technology over the last couple of years -- the dramatic increase in accuracy in the last three to five years, combined with advances in imaging technology -- the products have become more affordable, and therefore there has been more interest from consumers and more entry into the market from competitors. >> to what extent do the companies compete on accuracy, and how could a consumer know more about the accuracy rates of facial recognition? >> they do compete on accuracy. 
the nist testing program plays a really helpful role in providing a benchmark of accuracy. the companies are competing with each other to get the best scores on those tests, and they make the results available to their customers. there is an important distinction as well. in this testing, you have static data sets that are already there, whereas those are not necessarily the same type of images you see in a deployed system. other types of testing need to be done in a fully deployed system to really determine what the accuracy is. >> what private sector best practices exist for securing facial images and the associated data, such as face print templates and match results, in these facial recognition technology systems? >> as i mentioned earlier, sia is developing a set of best practices, based on the fact that many of our members have produced best practices that they work with their customers to implement. these would accomplish privacy goals. i have a couple of examples, but one of the most significant here is that many of these products already have built into them the ability to comply with the strictest privacy laws, the gdpr in europe. this has to do with encrypting photos, encrypting any kind of personal information associated with them, securing channels of communication between the server and the device, as well as procedures for looking at someone's information and being able to delete it if requested and tell someone what information is in the system. >> could you summarize succinctly some of the best practices that exist for protecting that personally identifiable information that is incorporated into it? is it too complicated a system to explain here? is there something we can have to read? >> i will be happy to provide some more details later, but certainly one of the most important things is encryption of data in case there is a data breach. 
it is important to point out that the face template is what the system uses to make a comparison between two photos. by itself, it is basically like the digital version of a fingerprint. if it is compromised by itself, it is not useful to anyone; it takes proprietary software to read it. >> i have been reading a lot recently about the difference between europe and the u.s. in developing these kinds of techniques. a number of state and international policies are impacting how information is collected. many directly address privacy of information. how have commercial entities conformed to these new legal structures? >> what we are seeing is that we are adapting here and already building these capabilities into products, in anticipation of a similar framework in this country at some point, and because many of the things the gdpr requires are good practice. companies are being proactive in building some of those things in. >> thank you. i yield back. >> i now recognize the gentlewoman from the district of columbia, ms. norton, for questions. >> we are playing catch-up. the way to demonstrate that most readily is what the cell phone companies already are doing with this technology. private industry appears to be way ahead of where the congress or the federal government is. the interesting thing is, it appears consumers may already be embracing facial recognition in their own devices, because as they compete with one another, almost all of the manufacturers have incorporated facial recognition into their latest mobile products. if one does it, the other is going to do it; all of them are already doing it. you can unlock your cell phone by scanning your face. now, the public thinks this is, and i suppose they are right, a real convenience instead of logging in numbers, and they have gotten accustomed to cameras. i remember when cameras were first introduced in the streets and people said that is terrible. 
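[editor's note: the face-template comparison described in the testimony above -- a system reduces each photo to a numeric template and declares a match only when two templates are similar enough -- can be sketched in a few lines. this is an illustrative toy, not any vendor's implementation; real systems derive templates from deep neural networks, and the vectors and threshold below are made-up values.]

```python
import math

def cosine_similarity(a, b):
    """Similarity of two face templates (feature vectors), in [-1, 1]."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(template_a, template_b, threshold=0.8):
    """Declare a match only when similarity clears the threshold.

    Raising the threshold lowers false positives but raises false
    negatives; tuning it is exactly the accuracy trade-off the
    hearing discusses.
    """
    return cosine_similarity(template_a, template_b) >= threshold

# hypothetical templates: the same person photographed twice vs. a stranger
enroll = [0.12, 0.80, 0.35, 0.41]
probe_same = [0.10, 0.82, 0.33, 0.45]
probe_other = [0.90, 0.05, 0.70, 0.10]

print(is_match(enroll, probe_same))   # -> True (similar vectors)
print(is_match(enroll, probe_other))  # -> False (dissimilar vectors)
```

this also illustrates the witness's point about a compromised template: the vector alone reveals little without the (typically proprietary) model that produced it and the matching logic around it.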
of course, there is no right to privacy once you go out in the streets. but to talk about my cell phone, there is a lot of private information in there. according to recent reports, this technology is not foolproof. that is my concern -- that, for example, a simple photograph can fool it in some circumstances. unauthorized individuals can get into your cell phone and any sensitive information you have in there; people store things like their email, banking, and the rest of it. do you see problems that are already there, with companies now integrating facial technology in devices like this? it looks like the public sees convenience, and i don't hear anybody protesting it. would you comment? >> thank you very much. i think that is an excellent question. we do see these use cases in many applications, and with phones being the most personalized ones that people have, they make a good example of the variations in place in the market of the different ways facial recognition technology is being used. for example, in your phone -- i'm going to use apple as the example, and this is my best understanding -- apple takes a representative picture of your face using three-dimensional imaging in order to prevent things like spoofing with a photo or another person, and it takes it at a level of detail that stands up to about a one in 10 million error rate, which is a pretty substantive level for something that is in fact part of a two-factor process: you have to have the phone and know whose phone it is, and have their face and match whatever standard there is. facial recognition that can identify someone off of a video feed is a different level of application, and certainly should be considered and regulated in a very different way than that. i do think we see those things being used in different ways already, and some of them have started to have some blowback on them in things like the criminal justice system. 
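[editor's note: the distinction drawn above between one-to-one phone unlock and identification off a video feed can be made concrete with a little probability. assuming a per-comparison false match rate of one in 10 million (the figure cited in the testimony) and independent errors -- both simplifying assumptions -- the chance of at least one false match grows quickly as the searched gallery grows.]

```python
def prob_false_match(per_comparison_fmr, gallery_size):
    """Probability of at least one false match when a probe face is
    compared against every template in a gallery, assuming each
    comparison errs independently with the given rate."""
    return 1 - (1 - per_comparison_fmr) ** gallery_size

FMR = 1e-7  # one in 10 million, the figure cited for phone unlock

# one-to-one verification: a single comparison
print(prob_false_match(FMR, 1))  # ~0.0000001

# one-to-many identification against ever larger galleries
for n in (10_000, 1_000_000, 10_000_000):
    print(n, prob_false_match(FMR, n))
```

under these assumptions, a rate that is negligible for unlocking one phone yields roughly a 63% chance of at least one false match when searched against a 10-million-face gallery, which is why the witness argues the two uses warrant different regulation.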
that is where it has gotten people's attention, and people have said, where are the places where we need to draw those lines and say it should not be used here -- maybe not at all, or if it is, it should be used in limited and regulated ways. >> does the average consumer have any way to confirm -- should they have any way to confirm -- that these cell phone manufacturers are in fact storing their biometric data on their servers? what should we do about that? consumer knowledge. >> the average consumer does not, and indeed many researchers and many lawmakers don't, because this technology, as i wrote about in my written testimony, is hidden behind trade secrecy. this is a corporate technology that is not open for scrutiny and auditing by external experts. i think it is notable that while nist reviewed 189 algorithms for its latest report, amazon refused to submit its rekognition algorithm. they said they could not modify it to meet the standards, but they are a multibillion-dollar company and have managed other pretty incredible feats. what we see here is that it is at the facial recognition companies' discretion what they do and don't release. oftentimes, they release numbers not validated, or not possible to validate, by the general public. we are left in a position where we have to trust these companies, but we don't have many options to say no or to scrutinize the claims they make. >> thank you. the gentleman from louisiana is now recognized for questions. >> thank you. i would like to ask unanimous consent that the statement of chief james craig of the detroit police department, his written testimony, be entered into the record. 
i would also like to recognize and thank our esteemed colleague for his opening statement regarding freedoms and liberties, resisting and curtailing the manifestation of big brother, and reducing and controlling the size and scope of federal powers, and i want you to know, good sir, the republican party welcomes your transition. >> madam speaker, facial recognition technology is emerging technology. of course, it is produced by private entities. law enforcement does not produce its own technologies. it is common and it is here. it will get better as the weeks and months move forward. it should be no surprise to us that the effective percentages of identification using a new technology will increase as time moves forward. and there is more coming. there is total recognition technology coming that measures the specific physical features of individuals: their gait, the length of their arms, etc. what we should seek is a means by which to make sure that big brother is not coming. i have a background in law enforcement, and recognition technology has become manifested in many ways. you have license plate readers being used from sea to shining sea. there are readers in police units that drive around and read license plates. if we have an eye out for a particular vehicle of a particular color, that is human recognition. when we see that vehicle, we have a license plate reader reading every plate we pass, and we learn if a plate is expired or if the person associated with that registered vehicle is wanted. if the guy that walks up to the building and gets in that vehicle appears to be a suspect we have identified or have a warrant for, there is going to be some interaction. this is a technology that has evolved and will come to manifest in the next 20 years, and it has gotten very effective. facial recognition technology is completely common. 
we use digital records from crime scenes -- pictures, the best we can get from surveillance video, surveillance cameras at the business, or whatever was available to us. we would pass these images on and have the shifts see these images. the odds are pretty good somebody would recognize that guy. this is the beginning of an investigation that helps law enforcement cultivate a person of interest for us to speak to. there are two things we stand against, and this is where the ranking member and i have had discussions at length. both of us stand against live streaming images of free americans as they travel to and fro across america and at businesses, run through some database, with the police suddenly showing up to interview a guy. but solving a crime is different; we are already using digital images to the best of our ability to solve crimes, and every american should recognize that this is an important tool. the chief's written statement that i asked be submitted has several examples of the use of this technology. i have three specific questions which time will not allow. we have had several hearings about it, and i thank the majority party for its focus on it. i hope we can come together with effective legislation that both allows the technology to move forward and protects the freedoms and privacy of the american citizens we serve. i yield. >> thank you. i now recognize the gentleman from massachusetts for questions. >> i want to thank you and the ranking member for collaborating on this hearing and approaching it in the right way, i think. first of all, i want to thank the witnesses for your testimony. it is very helpful. 
as i understand it -- i am a little skeptical -- they tell me that with the facial recognition you use on your iphone, the way apple says they handle this, in this case the data stays in the phone and does not go to a server at this point. i sort of question whether they will keep it that way in the future. i think there is probably a greater danger when they get facial recognition right; the problem is what happens when they have all this data out there, whether it is law enforcement or private firms. we had a massive data breach at suprema, a big biometrics collector -- 100 million people i think, i'm sorry, 27 million people in that breach. then customs and border patrol, 100,000 people that they identified along with license plates -- that was breached. the concern is, once this information is collected, it is not secure. that is a major problem for all of us. i want to ask some specific questions about tiktok, which was purchased by a chinese company. the kids love it. in the last 90 days, one billion people have downloaded it in the u.s. and europe. it is owned by the chinese government -- i'm sorry, it is located in beijing. under chinese law, a recent national security law in china, they have to cooperate with the chinese government. we already see it happening. you don't see much about the protests in hong kong in the app; they are already exercising censorship on tiktok. it would have to cooperate with china. that is a national security concern for us. it is under review. the other situation is the apple iphone and our efforts, because of the pensacola shootings, to get apple to open up the iphone so we can get that information. if you step back, it is sort of this: we are worried about china doing what we are doing with apple. we are trying to get access to that data just like china can get all that data from tiktok. how do we resolve the dilemma? 
is there a way we can protect our citizens and others who share that data or have their identity captured? how do we resolve that so we use it to the benefit of society? >> i think the bottom line really is balancing the understanding of the risks associated with policy decisions that are made. those policy decisions are outside of nist's purview, but with regard to the debate on access to apple and encryption, we know that in the government and broadly speaking, there are two ... >> if it's not in your discipline, let me ask ms. whittaker the same question. >> i think the short answer is that we do not have the answer to that question. we have not done the research needed to affirmatively say that we can protect people's privacy in a complex geopolitical context. i think we need more of that research, and we need clear regulations that ensure these systems are safe. >> i think we need to unabashedly support encryption, so consumers have control over the data and third parties don't. that is the way consumers control the information and keep it out of the hands of the government. >> i have exhausted my time. thank you for your courtesy. >> thank you so much. the gentleman from texas is now recognized for questions. >> thank you for your work on this topic. this is an extremely important topic. we are going through the birth pains of this technology. -- primarily developed by the government or commercial entities? >> it's a mixture of both. in some cases, especially with federal agencies, they developed their own systems over time. i think increasingly, it is commercial solutions. >> commercial solutions. what's been the industry's response to the nist report? >> from my perspective, the industry has been involved from the outset. they have been very supportive of the efforts we have undertaken over the last 20 years. so it's generally a very positive thing. the industry feels challenged to do better. >> i think it depends on the industry. 
those that are participating will evaluate it. but it excludes amazon, because amazon is a cloud-based system, and apple, because they are an infrared system; we need to include those as well. >> and mr. castro, mr. parker, you both mentioned it has been improving dramatically year by year. would you say we are weeks, months, years, decades away from getting this technology to an acceptable... >> if you look at the best performing algorithms right now, they are at that level of acceptance we would want. there are error rates of .01%. that is incredibly small. when we are talking about orders of magnitude between error rates, if you have something 10 times worse, that is still a .1% error rate. a .01% error rate is one out of 10,000, and .1% is one out of a thousand; these are very small numbers. >> as mr. castro said, we are reaching that point now. there are some reasons why the industry has really focused on false-negative type error rates and reducing those over time, and those are down to extremely low levels, far better now than five years ago. given the results on demographic effects, the industry is now looking at some of the false-positive rates and trying to make those more uniform -- keeping homogeneous rates, those that are mostly the same across different demographic groups. there is important context to consider these numbers in. one was mentioned already: the total relative scale. 100 times a .01% error rate is still only a 1% error rate. in some cases, it matters more than others. with law enforcement investigations, which is where this report looks at false positives, investigators are looking at a set number of candidates that meet a criterion, usually around 50. in the case of new york city, they actually looked through hundreds of photos that were potential matches. there is a human element there; the technology functions as a tool to enhance their job, and it is up to a human to decide if there is an actual match. in that case, it is the false-negative rate that matters, because you want to make sure you're not missing anyone in your dataset. 
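[editor's note: the arithmetic in the exchange above -- percent error rates, their one-in-n equivalents, and what a "10 times worse" demographic differential means in absolute terms -- can be checked in a few lines. the rates used are the ones quoted in the hearing, not measurements of any particular algorithm.]

```python
def one_in_n(error_rate):
    """Express a fractional error rate as '1 in N'."""
    return round(1 / error_rate)

baseline = 0.0001        # a .01% false match rate, as quoted
worse_10x = baseline * 10  # a 10x demographic differential

print(one_in_n(baseline))   # 10000 -> one in 10,000
print(one_in_n(worse_10x))  # 1000  -> one in 1,000

# absolute impact: expected false matches per million comparisons
print(baseline * 1_000_000)   # 100 at the baseline rate
print(worse_10x * 1_000_000)  # 1000 for the 10x-worse group
```

the same numbers support both readings voiced in the hearing: relatively, one group's error rate is 10 times another's; absolutely, both rates remain a fraction of a percent. which framing matters depends on the stakes of a false match, as the witnesses note.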
>> how do we get this right from the perspective of where we sit? sometimes we step in as the federal government to fix the problem and end up creating an environment that prohibits the technological advancements or the natural market forces that work to get us to that solution. sometimes it makes us take a step back. what is the right approach? >> facial recognition is just one of many advanced technologies. it is important to recognize that the issues we have do not really have to do with the technology; they have to do with how it is used. i think we need to focus on the specific concerns we have and tailor restrictions to them. that is a more sensible approach. we have seen a proposal in the senate that would do something like that. >> i yield back. >> i now recognize ms. kelly for questions. >> thank you for holding this hearing. we have talked previously about bias in facial recognition and artificial intelligence generally, but the nist part three report on demographic effects provides useful data on the development of a facial recognition program. i have raised concerns about biased and unfair algorithms and the dangers of allowing these biases to perpetuate. the results of the part three report are not particularly surprising, with women and individuals of african and asian descent having higher false-positive rates than middle-aged men. in your testimony, i was hoping you could clarify the statement that policymakers and the public should not think of facial recognition as either always accurate or always error-prone. we should be pushing to have these technologies get as close to always accurate as possible. why should we not strive to think about this technology as always accurate, and how long will we have to wait for this technology to reach close to always accurate for all demographic groups? >> thank you for the question. i don't know how long it will be. i can't predict the future. 
the statement refers to the fact that among the characteristics you have to include in any discussion is the algorithm you're using. as my testimony stated, while some algorithms exhibit substantial bias or demographic effects across the different demographics, the most accurate ones do not in the one-to-many category. you have to know the algorithm you're using and the context. compare the ability to automatically identify aunt muriel in a family photo to the identification of a suspect, where there are some very serious concerns about ensuring you get that right. you have to know the context in which you are using the algorithm, you have to know the algorithm you're using, and you have to know the overall system. we test mathematical algorithms; we don't have the capacity to test, and we don't test, systems that are deployed in the field. those have implications as well. >> while i have you, can you discuss the benefit of auditing facial recognition systems for bias? >> from our perspective, whether it is policymakers or government entities or private sector entities that want to use facial recognition, the most important thing is to understand and have the accurate, unbiased data that we can provide so that appropriate decisions are made with regard to whether to regulate or not, what kinds of regulations might be needed, and in what context. if you are in a procurement situation procuring a system, you want to know the performance of that system and the algorithms it depends on. those are the things that we think are appropriate from an auditing perspective. we don't view the testing we do as an audit so much as providing policymakers and government and the private sector with information. >> i know you talked a little bit about auditing. i would like you to answer. >> i think auditing is absolutely important, but we need to understand how we are measuring these systems. 
in my written testimony, i give an example of one of the most famous facial recognition measurement systems -- a data set we measure these systems against called labeled faces in the wild. in short, it features photos of mainly men and mainly white people. so the way the industry assessed accuracy was the ability to recognize white men. that gives us a sense of why we are seeing these pervasive biases across the systems. and those standards don't ask questions about the data that will be used in the system, what the deployment environment will be, or how these systems will be used -- questions like the ones the atlantic plaza towers tenants were concerned about. >> i want to give ms. leong a chance before my time runs out. >> it is critical. the standards being used matter. one of the regulatory options is to have requirements that say government uses have to be evaluated or ranked by some external objective tester, with clear transparency into what the standards were, how it was measured, and how it was done. >> thank you. i yield back. >> the gentleman from texas is now recognized for questions. >> facial recognition is extremely important and viable for our government, in places like border patrol and law enforcement. at the same time, there is also no question that this technology allows for any individual to be identified in public spaces, be it through the private sector or government entities. therein lies the potential problem and grave concern for many people. both the private sector and government should bear the responsibility of individual privacy and data security. i am not exactly sure where this question is best directed -- any of you, jump in here. let's start with the private sector. are there companies that are using facial recognition technology that are addressing this issue of civil liberty, the whole question of privacy? in other words, within the private sector, is anyone setting forth best practices -- any of the stakeholders? 
>> we have identified a number of companies that have put out principles around privacy. specifically microsoft, amazon, and google have all made public statements where they identify what specifically they are doing around facial recognition: how they want to protect privacy, what they are doing in terms of development of the technology, and what they are doing with developer agreements -- what developers have to agree to in order to use the technologies. >> what are the principles, the guidelines? >> things around transparency, consent, data protection, notification -- a whole set of issues. this matches the guidelines we have seen come out of other forums as well. >> we have a big concern brought up that people are being identified without their consent. so what are the safeguards? it is one thing to have policies and things written down, and another thing to implement these things and protect the public -- protect the individuals who have not consented to this type of technology. so how will these facial recognition products, as they are developed, inform individuals that they are being exposed to them, potentially without their knowledge? >> a number of the recommendations are around how you actually communicate to individuals under what circumstances the technology is used. part of the source of confusion, i think, in some areas is that there are many different types of systems out there. some are just doing facial analysis. if you walk by an advertising sign -- >> without consent? >> without consent. they are just tracking the general demographics of who has seen the ad. they are not tracking anyone's identity. for that type of purpose, they are not going to be obtaining consent, but if they are going to be identifying individuals, that will require consent, so you have to have signed up. >> let's go to the atlanta airport, which right now is the pilot airport for some facial recognition technology. you have the busiest airport in the world, with thousands of people walking around all over the place. 
when this technology is implemented, there is no way to get consent from everyone walking around. >> they have the ability to opt out. you don't have to go through that if you're going through the international terminal. >> how does a person opt out? >> you simply say you don't want to use the self-serve kiosk and you can go up to an agent. >> so you are saying that technology would be used just in the security lines? >> no, for boarding and screening and then check-in. in each of those areas delta has said they have the ability to opt out, and they allow consumers to do that. >> do you know any case where the government in particular is using this type of technology without the knowledge, without the consent of an individual, where it actually violated the fourth amendment? >> i don't know that. i don't think we have documentation of that. i do think that is why we need a search warrant requirement, so we know when those requests are made. >> i would agree. therein lies the potential problem with all of this. we see the value of the technology, but somehow we have got to land the plane in a safe zone that does not violate people's rights. i appreciate you being here. >> the gentlelady from michigan is now recognized for questions. >> thank you so much, madam chair. this year, i introduced hr 153 with my colleague regarding the need for the development of guidelines for the ethical development of ai, transparency of ai systems and processes, and helping to empower women and underrepresented or marginalized populations. right now, we have the wild, wild west when it comes to ai. artificial intelligence is not the only emerging technology that requires the development of ethical guidelines. the same discussion must be carried over to the use of facial recognition. there was a member who introduced a statement from the detroit police department. i represent a majority minority district in the city of detroit, it's one of my cities.
approximately 67% of my constituents are minorities, meaning the vast majority of my constituents have a higher likelihood of being misidentified by a system that was intended to increase security and reduce crime. last month, nist released a study, the face recognition vendor test part three, which evaluated facial recognition algorithms provided by industry to determine their accuracy across demographic groups. the report found there are higher rates of inaccuracy for minorities compared to caucasians. when algorithms are developed using a biased process, it is going to give you a biased result. what can we do? first of all, there should not be any american citizen who is under surveillance where it is not posted, and where there is not an identified way to contact the company to say, what are you using my image for? we in america have the right to know if we are under surveillance and what you are doing with it. another thing: any data that you gather should be required to go through some type of process before the release of that data. i can't just put up a camera, gather information, and then sell it. we had the conversation about the ring doorbell. it is helping to catch criminals, but if you are going to give the information from ring to the local police department, there should be some formal process of disclosure to the public so that they know that is happening. i am very concerned about the movement of this technology. so, some places have just said, we are not going to use it, but we know this technology is here and moving forward. instead of just saying don't use it, we need, as congress, to be very proactive in setting ethical standards, and set the expectation for our public that if my image is being used, i know, and i know what my rights are. that is something i feel strongly about. in your opinion, with so many variations of accuracy in the technology, what can we do that will say that we will take out these biases?
we know there have been biases in the algorithms. what can we do as a congress to ensure we are stopping this? >> thank you for the question. i think when we talk about this technology racing forward, we have had an industry that has raced forward selling these technologies, marketing these technologies, making claims of accuracy that end up not being accurate for everyone. but we have not seen the validation keep pace. we do not have public understanding, and we do not have mechanisms for real consent. i think we need to pause the technology and let the rest of it catch up so that we don't allow corporate interests and corporate technology to race ahead and be built into our core infrastructure without having put the safeguards in place. >> the police chief in detroit submitted a statement for the record. he made a promise there will never be a trial in court based solely on facial recognition. there should be something that does not allow for someone to be prosecuted based solely on facial recognition, when we know this data is not accurate and it has biases. that is something i think we as a congress should do. thank you. my time is expired. >> thank you. you raise a lot of very good points. the gentleman from ohio is now recognized for questions. >> it is wrong sometimes, isn't it? and it is disproportionately wrong for people of color, is that right? and this all happens in a country, the u.s., where we now have close to 50 million surveillance or security cameras across the nation. is that right? you can say yes. we talked earlier about context. i think a number of witnesses talked about context. the context of opening your phone is different than your apartment complex having a camera there. but it seems to me the real context concern is what is happening with the government and how the government may use this technology.
we know facial recognition was used by baltimore police to monitor protesters after the death of freddie gray a few years ago in the city of baltimore, which is scary in and of itself. then of course you had five bullet points. i appreciate what you are doing with the institute that you cofounded, but in point number five you said, facial recognition poses an existential threat to democracy and liberty. that is my main concern, how government may use this to harm my first amendment and fourth amendment liberties. you have to think about context even in a broader sense. we have to evaluate it in light of what we have seen the federal government do in just the last several years. do you know how many times the fbi lied to the fisa court in the summer of 2016 when they sought a warrant to spy on an american citizen? >> i don't remember the exact number. >> 17 times they misled a court with no advocate looking out for the rights of the citizen who is going to lose their liberty. 17 times they misled the court. we found out it was worse. they spied on four americans having to do with the campaign. it is not just how facial recognition could be used by the government; we already know how it has been used, like it was used in baltimore to surveil protesters. the fbi went after four american citizens associated with a presidential campaign and they misled the court in the initial application 17 times. of course, that is after what happened a decade ago. a decade ago, the irs targeted people for their political beliefs. there was no facial recognition technology there, they just did it. asked them questions like, do you pray at your meetings? who is your guest at your meetings? when we talk about why we are nervous about this, context is critical.
the context that is most critical and most concerning to republicans and democrats on this committee, and frankly all kinds of people around the country who have taken the time to look into this a little bit, is how the government will use it and potentially violate their most basic liberties. that is what we are out to get at. you said in your testimony, bullet point number five, it is time to halt the use of facial recognition in sensitive social and political contexts. can you elaborate on that? are you looking for a flat-out moratorium, stopping it? what would you recommend? >> thank you for that question and the statement. i would recommend that. i would also recommend that the communities on whom this is going to be used have a say on when it is halted and how it is deployed. are they comfortable with the use? do they understand the potential harms to themselves and their communities? have they been given the information they need to decide that? >> are you talking about any private sector context? the reference would be an apartment complex where you can enter with your face versus a key or something. or are you talking about government? elaborate on that. >> absolutely. i am talking about both. the baltimore pd example was using private sector technology. they were scanning instagram photos through a service called geofeedia that gave them feeds from the protests. they were matching the photos against their facial recognition algorithms to identify people with warrants whom they could then potentially harass. there is an interlocking relationship between the private sector, who are essentially the only ones with the resources to build and maintain these systems at scale, and the government use of these systems. there are two levels of obscurity: there are law enforcement and military exemptions, where we do not get information about the use of these technologies, and then there is corporate secrecy.
these interlock to create total obscurity for the people who are bearing the cost of these violating technologies. >> thank you. my time is expired. >> thank you. the gentleman from california is now recognized for questions. >> i know folks think that democrats do not care about liberties or freedoms, but we do. we also care about them not only in the public space but also in the bedroom and over one's body. that is the way i approach this issue, from a very personal perspective. i made my concerns about this technology pretty clear: the dangers it poses for communities of color when used by law enforcement, the racial biases in artificial intelligence. as i was looking into it, amazon continued to come up because they were the aggressive marketers of this new technology. they do it under a shroud of secrecy. i want to be clear, i know this technology is not going anywhere. it is hard to put limits on technology, especially using the law, and i have seen this time and time again, coming from california, where you have large companies. i understand that the wheels of government turn slowly. if companies can move quickly, they will outpace, outrun the government in putting in any kind of limitations. you have seen this with scooter companies who have dumped thousands of scooters on the street, no regulations, and all of a sudden it forces the government to react. we will react and we will start putting limitations on it. i know it is tough, but there are a lot of questions. one of the things i have been trying to figure out is what agencies and companies, what federal authorities, are using it. how are they using it? who sold it to them? is there a third party validator who has evaluated its accuracy? because when this technology does make a mistake, the consequences can be severe. according to the nist study, an error in an identification application such as visa, passport or fraud detection could lead to a false accusation, detention or deportation.
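the demographic error disparities discussed throughout this hearing -- higher false match rates for some groups at the same decision threshold -- can be illustrated with a small sketch. this is a simplified, hypothetical example with made-up similarity scores; it is not nist's data or methodology, and `false_match_rate` is an illustrative helper written for this sketch, not a real library function.

```python
# a simplified, hypothetical illustration of comparing false match rates
# across demographic groups at a fixed decision threshold. the scores
# below are invented for illustration; this is not nist's data.

def false_match_rate(impostor_scores, threshold):
    """fraction of impostor (non-matching) pairs the system wrongly accepts."""
    accepted = sum(1 for s in impostor_scores if s >= threshold)
    return accepted / len(impostor_scores)

# similarity scores for impostor pairs, grouped by a hypothetical demographic
impostor_scores_by_group = {
    "group_a": [0.10, 0.20, 0.30, 0.40, 0.85],  # one wrongly high score
    "group_b": [0.15, 0.55, 0.70, 0.88, 0.92],  # several wrongly high scores
}

THRESHOLD = 0.80  # the operator-chosen match threshold

for group, scores in impostor_scores_by_group.items():
    print(group, false_match_rate(scores, THRESHOLD))

# at the same threshold, group_b sees twice the false match rate of
# group_a -- the kind of demographic differential the report describes.
```

note how the disparity depends on the threshold as well as the scores: raising the threshold lowers both groups' false match rates but does not by itself equalize them, which is why the witnesses discuss per-use-case threshold setting as a mitigation rather than a fix.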
doctor, the recently released study found that facial recognition technology not only makes mistakes but the mistakes are more likely to occur when the individual is a racial minority, a woman, a child, or an elderly individual. is that correct? >> for most algorithms we tested, that is correct. >> did the study find that the disparities were limited to a few developers, or were the biases in accuracy more widespread? >> mostly widespread, but there were some developers whose accuracy was sufficiently high that the demographic effects were minimal. >> are you aware, i know miss whittaker answered this question, but has amazon ever submitted their technology for review? >> they have not submitted it, but we have had ongoing discussions with them about how we can come to an agreement about submitting the algorithm. it is an ongoing conversation. it is an active conversation we are having. >> how long has it been ongoing? >> i do not know exactly, but it has been some months at least. >> this is in the context of them putting out a blog post, and the blog post, regarding the principles that you were referring to, was in response to a letter that myself and senator markey sent to them. you would think that it would be more than just a blog post; you would think it would be something more serious to address our level of concern. miss leong and miss whittaker, i wanted to ask each of you, can you discuss the implications of the newly released report on facial recognition software? what are the potential harms of using biased systems? >> i think the benefit of the report is that it discloses the bias that is present in many of the algorithms being used and gives consumers, both individuals and businesses, who might be selecting these algorithms for use, good information on which to base their choices.
i want to make the point that even though a large number of algorithms were tested, those were not equally spread across the market in terms of market share. the vast majority of the market right now, at the high end, particularly in government contracts at the federal, state and local levels, as well as high-end commercial uses like the nfl or sports stadiums or venues or amusement parks, overwhelmingly already employs the algorithms that are at the top end of the spectrum and that have very low error rates. it is not an evenly distributed problem, and part of the problem is understanding where the algorithms are being used and by whom, and which are causing the most harm. >> miss whittaker, that will be my end, but i will let you answer. >> thank you. absolutely, it is important to emphasize, as mr. jordan did, that accurate facial recognition can be harmful. bias is one set of problems, but it goes beyond that. i think any place where facial recognition is being used with social consequences, we will see harm from these racially and gender biased impacts. i think we can look at the case of willie lynch in florida, who was identified solely based on a low-confidence facial recognition match from a cell phone photo taken by an officer. he is now serving eight years based on that photo, and he struggled, and was eventually denied, to get that evidence released during his trial. we are seeing high stakes that really compromise life and liberty here from the use of these biased algorithms. in response to the question of where they are being used, which algorithms are being used where, we do not have public documentation of that information. we do not have a way to audit that, and we do not have a way to audit whether they are representing the performance in different contexts like amusement parks or stadiums or whatever else.
there is a big gap in the auditing standards, although the audits we have right now have shown extremely concerning results. >> with that, i yield back, madam chair. >> thank you. the gentlewoman from west virginia, mrs. miller, is now recognized for questions. >> as technology evolves, it is important that we are on top of it. i saw firsthand how they were using facial recognition as a form of payment when i was in china. i was also exposed to several concerning uses of facial recognition technology. as a committee, it is our responsibility to make sure anything that is done in the united states is done thoughtfully and prioritizes safety and individual security. mr. parker, when i am at a busy airport i am really glad that i have clear to get through. even though we have tsa, when you are in a hurry it is really nice that you can use facial recognition and go forward. can you elaborate on some of the examples of beneficial uses for consumers and businesses? >> sure, i will stick to the private sector uses. one really important one is protecting people against identity theft and fraud. here is how it works. someone walks into a bank and asks to open a line of credit using a fake driver's license with a customer's real information. as part of the process, a comparison is made, and it is determined they may not be the person they say they are, and so they talk to the management. by that time, the person committing the fraud is long gone. that is a really useful case for the technology that people do not think about. from our industry's perspective, facial recognition is also providing a sense of security. it is typically used to augment credentials such as keys or cards. those things can be shared, stolen or simply lost. biometric entry systems provide additional convenience, for example when entering a commercial office building during rush times.
another example is the technology being used against organized retail crime and theft, which has been skyrocketing in recent years, hurting american businesses. >> do you think the mainstream media outlets have given an honest portrayal of how this technology is utilized and the reality of its capabilities? >> i do not think so. i think it is a complex issue that you are talking about here. it tends to get oversimplified and mischaracterized. going back to what i said earlier, the issue that is causing some concern is how the technology is used. it is not the technology itself; there are other technologies that can be used in similar ways, and so we should think more constructively about what the rules should be for the use of many different types of technology. >> thank you. i have a very good friend in west virginia, but if we scanned both of you, you would not look anything alike. during a house homeland security committee hearing on july 11th, in your testimony you discussed accuracy rates across multiple demographics and how inaccurate results are diminishing. now that you have published the report, is that still accurate? and in what other areas is the technology improving? >> i hope my statement in july was that the most accurate algorithms are exhibiting diminishing demographic effects. we certainly do believe that the report we released last month confirms that. >> you also stated that anytime the overall performance of a system improves, the effects on different demographics decrease as well. is that still true to this day? >> that is correct. >> good. knowing that accuracy rates have improved from 2014 to 2018, can you further explain the role of performance rates and why they are important for the end users of these technologies? >> absolutely.
it is essential that in the selection of a system, you understand the algorithm that the system uses, and select for an accuracy that is sufficiently robust to minimize risk for the application. in some cases, the application may have limited risk and the algorithm's accuracy may not be as important. in other cases, the risk may be severe, such as identification of suspects or access to critical infrastructure. if facial recognition is being used for that, then you want the algorithm at the basis of your system to be high performing. >> could you speak to the research and techniques that exist to mitigate performance differences among demographics, and what emerging research and standards nist is interested in supporting? >> thank you for the question. although we did not specify many of the mitigations that we would expect people to adopt today, one of the things that we do want to do is point policymakers and consumers to ways in which this can be mitigated. one of the mitigations can be the determination of the appropriate threshold to set, to ensure that for any algorithm you use, you set an appropriate threshold for the use case. another is the possible use of a separate biometric: in addition to the face, having a fingerprint or an iris scan or some other type of biometric validation would help to reduce the error substantially. >> thank you. i yield back my time. >> thank you. the gentlewoman from massachusetts is recognized for questions. >> the use of facial recognition technology continues to grow at a breathtaking pace and has now seeped into nearly every aspect of our daily lives. many families are unaware that their faces are being mined as they walk through the mall or the aisles of the grocery store, or enter their homes or apartment complexes, even as they drop their children off at school. in response, several municipalities, including within the massachusetts seventh congressional district, which i represent.
they have stepped up to the plate to protect their residents from this technology. we know that the logical end of surveillance is often over-policing and the criminalization of vulnerable and marginalized communities. it is also why i worked with my colleagues on legislation to protect those living in public housing from this technology. more recently, school districts have begun deploying facial analytics in school buildings and in summer camps, collecting data on teachers, parents and students alike. how widespread is the use of this technology on children in schools? >> we are seeing facial recognition systems being implemented more and more in schools. i think the actual number is still very small in terms of percentage penetration of the number of schools in this country, but it is certainly spreading and growing. it is one of the use cases we think is entirely inappropriate. there is no justification for facial recognition in the schools. they are mostly being used in security applications, sometimes in a sort of response to school shooter scenarios and things like that, which in my opinion they do not adequately address in any meaningful way, and it is not the best use of funds or the best way to heighten security around schools.
the other part of your question was about facial characterization programs, which i think are being used more and more in an educational context, where we are seeing systems that try to evaluate whether students are paying attention, what the engagement rate is, what the responsiveness of students is. as i think was mentioned earlier in the hearing by someone else, that is based on very questionable data at this point, and i think it definitely qualifies for the not-ready-for-primetime category, in the sense that we are seeing it very quickly applied in many cases where the science and the research is not there to back it up. it is particularly concerning when you are talking about children and schools, not only because they are essentially a captive population but because the labels or decisions that might be made about those children based on that data might be very, very difficult to later challenge or in any way reduce the effects on that particular child. >> serious security and privacy concerns. your study found that the error rate of facial analytics software actually increased when identifying children. is that correct? >> for most algorithms, that is correct. >> and why was that? >> we do not know the cause and effect exactly. there is speculation that children's faces, with less life experience, are less feature-rich, but we do not know that for sure because, with convolutional neural networks, it is difficult to make a determination of the reason. >> many of you mentioned ways in which these image databases can be vulnerable to hacking or manipulation. miss whittaker, when children's images are stored in databases, are there any unique concerns that are raised, or may arise? >> absolutely. security for minors is always a concern. >> this technology is clearly biased and inaccurate, and more dangerous when used in schools where black and brown students are policed and disciplined at higher rates than their white peers for the same minor infractions.
in my district, black girls are six times more likely to be suspended from school and three times more likely to be referred to law enforcement, again, for the same infraction as their white peers. our students do not need facial recognition technology that can misidentify them and feed the school-to-confinement pathway. last fall i introduced the push out act, which urges schools to abandon over-policing and surveillance and instead invest in resources and trauma-informed supports, access to counselors and mental health professionals, resources that will really keep our kids safe. in my home state of massachusetts, they are leading the fight in saying no to the deployment of facial recognition technology in our schools, and i am grateful for the activism and solidarity on this issue. i would like to include, pardon me, for the record a letter from the btu, the naacp, aft massachusetts, the aclu of massachusetts and many others urging our state to reject surveillance and policing in our schools. >> without objection, so ordered. >> thank you and i yield. >> thank you. the gentleman from north dakota is now recognized for questions. >> thank you, madam chair. there are a couple of things that we should talk about for a second because i think they are important, and i am going to go to the fourth amendment in the criminal context and how this could be deployed there. this is not the first time we have seen the fourth amendment confront new technology: it happened with telephoto lenses, it happened with gps trackers, drones, and now we are at facial recognition. to be fair, the fourth amendment has survived over time pretty well. biometric information has a different connotation, which i will get to in a second. i also agree with ranking member jordan that we cannot leave it for the courts to decide. one of the reasons we can't is that the courts will take a constitutional view, not a popular culture view, of privacy.
when we are in a civil context, with data sharing and these types of issues, i will be the first to admit, my facial recognition did not work on my phone over christmas. you know what i did? i drove immediately to the cell phone store and got a new one. i understand the convenience and those things. the carpenter case is a pretty good example of how at least the u.s. supreme court is willing to change how it views privacy in the digital age. part of our job as congress is to write laws and regulations that ensure we can maintain those types of privacy standards. one of the reasons biometrics is a little different, and i wish people were hearing this, is that there is one unique thing in a criminal case that is really, really relevant to facial recognition, and that is that identity cannot be suppressed. i could suppress a search, i could suppress 40 pounds of marijuana, i could suppress a gun, i could suppress a dead body, but you cannot suppress identity. as we continue to carve into these issues, one thing we have to absolutely understand is that in these types of cases, one, we need to apply a statutory exclusionary rule, because otherwise regulations do not matter in a courtroom, and two, we have to figure out a way for meaningful review when we are reviewing these cases. when we say we will never prosecute anyone based solely on facial identity, that is a fair statement, except there has to be an underlying offense; they are prosecuting on something else. it is really important. i also think it is important to recognize that not all populations are the same. there is a big difference between using facial recognition in a prison setting, or even in a tsa or border setting, and law enforcement officers walking around the street with a body camera, or people getting profiled at a campaign rally.
we have to continue to have those conversations, but i would also point out that one of the things we have to do when we are dealing with these types of things in the law enforcement scenario, and i do not care what law enforcement it is, state, local, federal, da, all of those, is figure out a way to account for false positives. the reason i say that is that i will use a not-quite apples-to-apples analogy. in north dakota we have drug dogs; not all of our law enforcement have them, but some do. if you are speeding down the highway, going 75 in a 55, and you get pulled over, and that highway patrolman happens to have a drug dog in his car, and he walks that drug dog around your car, the dog alerts, and they search your car and they do not find any drugs, and they let you leave and give you your speeding ticket and you go along your way, that data never shows up in that dog's training records. it never shows up. so when you are talking about the accuracy of a drug dog, or you are talking about the accuracy of finding a missing girl, or any of those issues, we cannot wait until that situation arises, because if there is a missing girl on the mall out here, i will be the first one standing on the capitol steps saying, use whatever technology you can deploy, grab everyone you can, let us find this little girl. i agree with that. but you cannot have regulations unless you have meaningful enforcement. one of the concerns i have with deploying this technology in a law enforcement setting is that it is very difficult, by the nature of how that works, to deal with those false positives. my questions are, when you are talking about the missing girl, how many people were stopped? how many people were stopped that were not that person? i am glad they found her, i am glad they caught the guys, but we have to be able to have a situation in place where we can hold people accountable, and the only answer i can think of is to continue to develop
and perfect the technology -- testing is published against a static population, but the problem is that in deployment it is a completely variable population. i think we have to recognize the fact that you cannot suppress identity. it is different from a lot of the other technologies, because if your threshold is 90% and you stop somebody at a 60% match and it still happens to be that person, under the current criminal framework i can make that motion, and the judge will rule against me and say, too bad, still arrested. with that, i yield back. >> thank you. the gentlewoman from michigan is now recognized for questions. >> thank you, madam chair. i think many of you probably know that i am particularly disturbed by the aspect of facial recognition technology being used by landlords and property owners to monitor their tenants, especially in public housing units. in detroit, for example, the city's public housing authority recently installed security cameras on public housing units, which we believe encroaches on people's privacy and civil liberties. these are people's homes. i do not think being poor or being working class means somehow that you deserve less civil liberties or privacy. miss leong, what are the privacy concerns associated with using facial recognition software to monitor public housing units? if you live in a low-income community, are your civil liberties or your privacy lessened? >> thank you for the question. of course not, or at least hopefully not. i think this is a great example of the conversation that needs to happen at the beginning of this, which is: what is the problem they are trying to solve by putting this into a housing complex, any housing complex? what does the landlord or the owner gain? what are they trying to do? is it convenience, a level of security, or just that it is a really cool technology that was offered at a discount and they want to use it? what are they trying to gain from it?
with that in mind, what are the risks to the occupants? in my opinion, that would be a commercial use, which means it should be installed only for those residents who chose to opt in and enroll and use it as the way in and out of the building; residents who did not want it should not be enrolled in the database. certainly, from a civil liberties point of view, if this was being used in some way against tenants, the other laws about disparate impact or protected classes do not go out the window because you use technology; they still need to be applied. it is a new way of evaluating people because it is a new technology, and so it raises challenging questions. >> these new technologies, they are for profit, right? >> the companies who design them sell them for profit. >> they are coming into communities like mine, which is overwhelmingly a majority black community, and testing these products, this technology, on people's homes, the parks, the clinics. it is not stopping. i heard from my good colleague from massachusetts about them being installed in schools. and i have a police chief who says this is magically going to make crime disappear. if you look, my residents do not feel safer. they actually do not like this green light that is flashing outside of their homes and their apartment buildings, because for some reason it is telling everyone it is unsafe here. it takes away people's human dignity when you are being policed and surveilled in that way. now, this is a question for dr. romine. they are now trying to say we are going to use facial technology for, what do they call them, the key fobs? they want to control access to people's homes using facial recognition technology. one of the consequences of that is misidentification. my colleague on the other side just talked about how he could not even access his phone. i am really worried that my people, my residents, are being used as a testing ground for this kind of technology.
do you have any comment in regards to that? >> the only comment i have on that prospect is that the algorithm testing that we do is to provide information to people who will make determinations of what is and is not an appropriate use. that includes this committee, any potential regulations or lack of regulation, and any deployment that is made in the private sector or otherwise. it is outside the purview of nist. >> i am really proud to be co-leading, with congresswoman pressley and congresswoman clarke, the no biometric barriers to housing act, which would ban facial recognition technology in public housing buildings and properties. we should be very careful. i think congressman mark meadows is right. i hear some of my colleagues on both sides saying, we have to fix the algorithms. i am not in the business, and we should not be in the business, of fixing for-profit technology industries. these new tools, they give them all these great names, but they are processes put in place of human contact, of police officers on the street. i increasingly talk about this with the police chief and others, and all they can say is, we did this and we were able to do that. but like my colleague said, how many people did you have to go through? i watched while they matched a suspect with 170-plus people. i watched as they took a male suspect and matched him with a female. 170. i watched. and there is a kind of misleading of the public in saying, you must not care about the victims. no, i actually care about the victims. how about the victims you are now misidentifying? you are increasing their number. with that, i do really want, and i hope you all read this, a report by the detroit community technology project. it is a critical summary of detroit's project green light and its greater context and the concerns with facial recognition technology. i would like to submit it for the record.
>> thank you, and i really do appreciate your leadership on this, and thank you so much, chairwoman, for doing yet a third hearing and continuing this critical issue that i know was important to chairman cummings. thank you very much. >> the gentleman from kentucky is now recognized for questions. >> i ask that you bear with me, i am battling laryngitis. laryngitis with a bad accent does not spell success. i think there is bipartisan concern here today for facial recognition technology moving forward. my question is for dr. romine, with respect to the national institute of standards and technology. what is nist's role in establishing government-wide policies? >> the only role that we have with respect to government-wide policy is providing the scientific underpinning to make sound decisions. as a neutral, unbiased expert body, we are able to conduct the testing and provide the scientific data that can be used by policymakers to make sound policy. >> how does a technical standard differ from a policy standard? >> certainly technical standards can be used by policy makers. in this case, a determination of policy that is predicated on the identification of algorithms based on their performance characteristics is one example of that. from a policy perspective of what to do or what not to do with face recognition technology, that is something we would support with scientific data but not with policy proclamations. >> let me ask you this: is nist the right agency to develop government-wide policies? >> i do not think so. i do not think that is our role. >> good. what is nist's role in developing accuracy standards for facial recognition technology?
>> our role is in evaluating accuracy, and in particular, one of the things that we have developed over the last 20 years is the appropriate measurements. these measurements did not exist, so we worked with the community to develop a set of technical standards for not just the measurements themselves but how to report these things, including the reporting of false positives and false negatives, the very details of what those constitute. >> thank you. mr. parker, i understand that the security industry association, which you represent, supports the chamber's recently released facial recognition policy principles. what are the principles and why do you support them? >> thank you for the question. i think that the chamber put a lot of really good work into developing this framework, and basically it mirrors some of the work that was done earlier by the commerce department, which convened a multi-stakeholder process that included industry but also other parties from commercial sectors on the question of what an appropriate commercial use looks like. some of the principles have to do with transparency, but also, as we were discussing earlier, what should be done as far as consent. i think that is going to cover most use cases. >> can these principles address those concerns while also promoting industry innovation? >> for the commercial use we are primarily talking about data privacy. that sits a little differently from the civil liberties concerns, which primarily surround government use. >> let me follow up on this. what does the path ahead look like for these principles? >> i think that the debate going on in congress right now about establishing a national framework for data privacy is really an important one, and i think how to set rules for the use of this technology in a commercial setting fits within that framework. we have had the gdpr in europe, and in the united states we have some states establishing their own frameworks, and that could be a real problem for our economy if we do not establish standardized rules.
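[editor's note] dr. romine's point about reporting false positives and false negatives can be made concrete with a minimal sketch. this is not NIST's actual evaluation harness; the similarity scores and the 0.90 threshold below are synthetic, purely to illustrate how the two error rates are defined and why they trade off against each other as the threshold moves.

```python
# A minimal sketch of false-positive / false-negative rate measurement for a
# face-matching algorithm. Each trial is (similarity_score, is_same_person):
# a false positive is an impostor pair accepted at the threshold; a false
# negative is a genuine pair rejected. Scores and threshold are synthetic.

def error_rates(trials, threshold):
    """Return (false_positive_rate, false_negative_rate) at a given threshold."""
    false_pos = false_neg = impostors = genuines = 0
    for score, same_person in trials:
        if same_person:
            genuines += 1
            if score < threshold:    # true match rejected
                false_neg += 1
        else:
            impostors += 1
            if score >= threshold:   # non-match accepted
                false_pos += 1
    return false_pos / impostors, false_neg / genuines

trials = [(0.95, True), (0.40, True), (0.85, True),
          (0.91, False), (0.30, False), (0.20, False)]
fpr, fnr = error_rates(trials, threshold=0.90)
print(round(fpr, 3), round(fnr, 3))  # → 0.333 0.667
```

lowering the threshold (say to the 60% figure mentioned earlier in the hearing) drives the false-negative rate down but the false-positive rate up, which is why the choice of operating threshold, and not just the raw algorithm quality, determines how often innocent people are matched.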
>> i yield back the balance of my time. >> the gentleman from virginia, mr. connolly, is recognized for questions. >> i thank the chair, and thank you so much for a stimulating conversation. it just seems to me that we are going to have to really grapple with the parameters of protecting privacy and controlling the use of this technology, and one of the traps that i hope we on my side of the aisle, particularly, don't fall into is continuously citing the false ids. if we make the argument that this technology is no good because there are a lot of false ids, that may be true today, and the concern is legitimate, but the nature of technology is that it will improve. so what will we say when it becomes 95% accurate? then what? are we conceding the argument that then you can use it with impunity? i would certainly argue that irrespective of its accuracy, there are intrinsic concerns with the technology and its use. maybe we have to look at things like opt-in and opt-out, where you require the consent of anybody whose face is at issue before it is transferred to another party, whether government or not. mr. parker, you were talking about concerns with government use -- do we have any reason to believe the private sector might also generate some concerns? >> that is why we need to establish best practices about how it is used, particularly for errors. >> errors? let me give you a different example. ibm took photos from a photo-hosting site called flickr. it sent a data set of 1 million faces to chinese universities. that was not the government doing it. it was a private entity. and it was not about accuracy, it was about an entire data set going to a government that has a track record of using this technology to oppress minorities. we know they are doing that. so might you have any concern about a company like ibm engaging in that kind of behavior and transferring an entire data set to chinese universities with close ties to the chinese government?
>> yes, certainly, and i think we have seen shifts in u.s. government policy to establish restrictions on exports to a number of chinese companies who are developing the technology we are talking about. >> ms. whittaker, your views about that. >> i think that highlights one of the issues that trying to implement consent raises, which is that those photos are on flickr. those are photos that someone may have put on flickr during a very different internet era, when facial recognition at scale was not a technical possibility the way it is today. they are now being scraped by ibm, they are being scraped by many, many other researchers to compose these data sets that are then used to train the systems that may be erroneous, that may target our communities and that may violate our civil liberties. where we ask for consent, and how consent could work given that we have a 20-year history of clicking through consent notifications without reading them as a matter of habit to get to the core technical infrastructure of our lives, remains a big open question. i hope we would be able to answer that. >> certainly, we could agree, could we not, that whether i clicked consent for flickr or any other entity to have access within reason to my photo, i never contemplated having that photo transferred to a foreign government or to a university with close ties to a foreign government. >> yes, or to have a corporation use it to train a system that they might sell to law enforcement in a way that targets your community. there are a lot of things we did not consent to. >> it just seems to me, madam chairman, that this is a third hearing where we all express concern about, frankly, informed consent for citizens or non-citizens whose data, in this case their face, may be used, how it may be used, and whether it may be transferred to a third party.
we have some work to do in figuring out the rules of engagement here and how we protect the fundamental privacy rights of citizens, unless we want to go down the road of transforming the definition of the zone of privacy. that is a very different debate, but it seems to me that we cannot simply concede that the technology will drive the terms of reference for privacy. thank you, madam chairman. >> thank you. the gentleman from wisconsin, mr. grothman, is now recognized for questions. >> i will start with ms. whittaker, and anyone can jump in if they want. can you talk about the degree to which this technology is being used in china today? first of all, i would like to thank mr. connolly for his comments. getting false information is a problem here, but i don't think it is the biggest concern; the biggest concern is that the technology becomes better and better at the evil purposes it is used for. my colleague seemed to imply that as long as we are not getting any false information, the more information we have the better. i think the less information we have the better. ms. whittaker, go ahead. >> thank you for the question. i want to preface my answer by saying i am an expert on artificial intelligence and i understand the tech industry very well; i am not a china expert. however, it is very clear that these technologies are being used in china to implement social control and the targeting of ethnic minorities. you have networks of facial recognition systems that are designed to recognize individuals as they go about their daily lives and issue things like tickets if they jaywalk and are recognized by a facial recognition system. >> could it be used against people who attend religious ceremonies in china? >> absolutely, the same way the baltimore police used it when people attended a freddie gray protest. you are seeing it deployed in a different context. >> i attended a rally for president trump. i think 2,000 people were there.
do you think it is possible that any facial recognition technology was being used there, so you would know who was showing up at the rally and who was hanging around outside before the rally? >> does the capacity exist as a technological affordance? it certainly does. again, the secrecy with which the technology is deployed in both the private and the public sector makes it difficult to speculate on that. >> but it wouldn't surprise you if it was being used there? >> no. >> okay. there is a concern i have. we have a government that has weighed in against certain people; the ranking member pointed out that the irs in the past has shown strong bias against conservatives and used the power of government against conservatives. we had a major presidential candidate a while ago saying he wanted to take people's guns. do you think it is possible that facial recognition technology is being used in that way? i am about to attend a gun rally in my district. would it surprise you if facial recognition were being used to build a database of the people who go to the gun show? >> facial recognition is being used to build many different kinds of databases. >> okay. kind of concerning there. to me the major concern is that our country will work its way towards china. a while back we had a presidential candidate who hostilely questioned a prospective judge because he was a member of the knights of columbus, which is kind of scary. can you see the day coming in which we are using facial technology to identify which people are attending a catholic church? >> it is the same principle as the baltimore police department using it to see who attended a freddie gray rally and target them if they had a warrant. it is already being used in that capacity, irrespective of which group it is targeting. >> if you set up a catholic church in china, do you think the red chinese government would use chinese facial recognition technology to know in the future who was a member of the church?
could they identify in china whether you showed up at a knights of columbus meeting? >> again, the technological capability exists, but i am an artificial intelligence expert, not a chinese geopolitics expert. >> anyone else want to comment on what is going on in china? >> i think it is a model for authoritarianism. i think one of the differences between china and the u.s. is that there the technology is advancing state policy. in the u.s., this is primarily corporate technology that is being secretly threaded through our core infrastructures without that kind of acknowledgment. >> amazon is a big player here. >> amazon, absolutely. >> they are a very political group, aren't they? have they expressed strong political opinions? >> they certainly hire many lobbyists. >> thank you for giving me an extra few seconds. >> thank you. the gentlewoman from new york, ms. ocasio-cortez. >> thank you, chairwoman maloney, and thank you again for holding a third hearing on something that is so important and is such an emerging technological issue. we have heard a lot about the risk of harm to everyday people posed by facial recognition. i think it is important for people to really understand how widespread this is. ms. whittaker, you made an important point just now that this is a potential tool of authoritarian regimes, correct? >> absolutely. >> and that authoritarianism, or that immense concentration of power, could be exercised by the state, as we see in china, but it could also be exercised by massive corporations, as we have seen in the united states. correct? >> yes. >> can you remind us, ms. whittaker or ms. leong, of some of the most common ways that companies collect our facial recognition data? >> absolutely. they scrape it from sites like flickr, news sites, wikipedia; they collect it through massive market reach, so facebook is a great example of that. >> so if you have ever posted a photo of yourself on facebook, that could be used in a facial recognition database?
>> absolutely, by facebook and potentially others. >> and if you use a snapchat or instagram filter, could that help hone an algorithm for facial recognition? >> absolutely. >> can surveillance camera footage that you do not even know is being taken of you be used for facial recognition? >> yes. cameras are being designed for that purpose now. >> currently, cameras are being designed for that purpose. people think, i'm going to put on a cute filter and have puppy-dog ears, and do not realize that data is being collected by a corporation, or by the state depending on what country you are in, in order to track and surveil you, potentially for the rest of your life. do you think consumers are aware of how companies are collecting and storing their facial recognition data? >> i do not. >> what can a consumer or a constituent like mine do if they have been harmed by a company's improper collection? in a previous hearing we talked about how facial recognition often has the highest error rates for black and brown americans, and the worst implication of this is that a computer algorithm will tell a black person that they have likely committed a crime when they are innocent. how can a consumer or a constituent really have any recourse against a company or agency if they have been misidentified? >> right now there are very few ways. there is the illinois biometric privacy law that allows private actors to bring litigation against companies for corporate misuse of biometric data. but one, you have to know what has been collected; two, you have to know it has been misused; and three, you have to have the resources to bring a suit, which is a barrier to entry that many of those most likely to be harmed by this technology cannot surpass. >> let us say, as this technology spreads, you walk into a store in the mall, and because the error rates for facial recognition are higher for black and brown folks, you get misidentified as a criminal.
you walk out and, let's say, an officer stops you and says someone has accused you of a crime, or we think you have been accused of a crime. you would not know that facial recognition may have been responsible for you being mistakenly accused, is that correct? >> that is correct. we have evidence that it is often not disclosed. >> that evidence is often not disclosed, which also compounds our broken criminal justice system, where people very often are not entitled to the evidence against them when they are accused of a crime, is that correct? >> the willie lynch case in florida is a case in point. >> so what we are seeing here is that these technologies are almost automating injustices, both in our criminal justice system, but also automating biases that compound the lack of diversity in silicon valley as well. >> absolutely. these companies are not reflective of the general population, and the choices they make and the business decisions they make are in the interest of a small few. >> madam chairwoman, i would say that this is some real-life black mirror stuff that we are seeing here. it is really important that everyone understands what is happening, because as you pointed out, ms. whittaker, this is happening secretly as well, correct? >> yes. >> thank you, that is my time. >> the gentleman from pennsylvania, mr. keller, is now recognized for five minutes. >> thank you, madam chair. i just want to say we all represent many people who are probably not familiar with the commercial and government uses of facial recognition technology. there is a lot of technology out there, and i am grateful to the witnesses for shedding a little bit of light on the topic. when we look at a proper approach to regulating the use of facial recognition technology, we need to balance personal privacy with whatever appropriate use there may be as a tool to make government or law enforcement capabilities more effective in what they do.
the reason i say this is that several years ago, something happened in the legal community called the csi effect, where television shows exaggerated the prevalence of dna in forensic evidence and the ease of its processing, and in criminal cases jurors used that public perception to claim that a lack of forensic evidence meant the police did not do their due diligence. today, many law enforcement television shows and movies utilize and reference facial recognition technology as part of their storytelling. there is a lot of concern here, and i have concerns with the fourth amendment and all of the rights that we have. mr. parker, can you explain to what extent you think current pop culture is filled with an exaggerated or distorted view of how prevalent or how capable facial recognition technology is, and whether there is an appropriate use of it? >> first of all, i do think that if you look at the portrayal of the technology in the media, it is far beyond what we can do right now. that is one thing to consider. the other thing is that, as was mentioned earlier, unfortunately the chinese government by policy is using technology, not just this one but many others, to oppress different groups, and it is a horrible example of how technology can be misused. i think the capability is different there. i am not an expert on china either, but as to using facial recognition that way, i suspect i can speak on behalf of my members that we have no interest in that. that is not the case right now with our systems, and i have seen no evidence that that is what is intended. that is certainly not a place we want to go. >> you mentioned that technology can be a great tool, and it can. that goes for anything. our phones can keep us well connected, and they can also become a great hindrance and distraction and be used for a lot of malicious and evil things, as a lot of people believe about social media and so on.
that can happen with anything, and it is a matter of how we effectively regulate it and make sure it is not used inappropriately. are we looking at a possible new csi effect in terms of facial recognition in law enforcement? >> that is a risk, and i think you are right to identify it. i think the key here is to have thorough use policies and constraints. there are many uses, both in the private sector and the public sector, where that is being done correctly. there are other cases we know less about because there is less transparency. part of the answer is accountability measures that ensure use of those systems is auditable, to make sure that they are only being used for the purposes specified by the people with authorization to use them. >> i appreciate that. this is a very sensitive issue, and i do appreciate the opportunity of having these hearings so that more people are aware of what is happening. thank you, and i yield back. >> i recognize the gentlewoman from new mexico for questions. >> thank you, madam chair. thank you all so much for being here today; we appreciate your time and effort in this hearing. i recently read that some employers have begun using facial recognition technology to help decide whom to hire. at certain companies, such as hilton and unilever, job applicants can complete video interviews using their computer or cell phone cameras, which collect data on characteristics like an applicant's facial movements, vocal tone, and word choice. one company offering this technology, hirevue, collects 500 data points in a 30-minute interview. the algorithm then ranks the applicant against other applicants based on a so-called employability score. job applicants who look and sound like the most successful current performers at the company receive the highest scores. ms. whittaker, i have two questions for you.
one, isn't it true that the use of facial recognition and characterization technology in the job application process may contribute to biased hiring practices, and if yes, can you please elaborate? >> it is absolutely true. the scenario that you described so well is one in which you create a biased feedback loop, in which the people who are already rewarded, promoted and hired at a firm become the model. look at the executive suite at goldman sachs, which uses this technology: you see a lot of men, and a lot of white men. if that becomes the model for what a successful worker looks like, then we could see a confirmation bias in which people are excluded from opportunity because they happen not to look like the people who have already been hired. >> thank you so much for that. so, ms. whittaker, would you agree that granting higher employability scores to applicants who look and sound like high-ranking employees may lead to less diversity in hiring? >> i would agree, and i would also say that that methodology is not backed by scientific consensus. >> thank you. ms. leong, do you envision any privacy concerns that may arise when employers collect, store and use data generated from video job interviews? >> yes, thank you for the question. that is absolutely a concern, since individuals may not be aware of what data is being collected, especially where those systems are being used in, say, an in-person interview; there is a camera running that is collecting some characterization profile, and the person may or may not be aware of that, or of whether it is part of the decision-making process for their application. >> thank you so much. like many of my colleagues, i have expressed concern over this technology. i am concerned that facial recognition technology disenfranchises individuals who do not have access to video or internet devices. broadband internet access is an issue in so many rural communities and other communities throughout this country.
i am worried that relying on these algorithms will only inhibit the hiring of a more diverse workforce. dr. romine, your testimony today highlighted many of these risks, showing that commercial facial recognition algorithms misidentified racial minorities, women, children, and elderly individuals at substantially higher rates. as members of congress, we must develop legislation to ensure we get the best of the benefits of this technology while minimizing its risks in employment decisions. chairwoman, i yield back. >> that concludes our questioning. we have no other witnesses for the ranking member. i am now recognizing him and others on his side of the aisle for five minutes, and then we will close with five minutes. >> i thank the chair. i will not take all of five minutes. the broad outlines of what we are trying to do legislatively, sort of as a start, and we are working with the chair and members of the majority as well, really amount to an assessment. i am talking about what the government, the federal government, is doing. the first thing we are going to ask is that we want to know which agencies are using this, how they are using it, and to what extent it is happening. several of you testified we just do not know that. we do not know to what extent the fbi is using it or other agencies are using it. we found out a few years ago that the irs was using stingray technology; why does the irs need that? so part of what we hope would be legislation that we can have broad support on, that the chair and both republicans and democrats can support, is: tell us what is going on now. second, while we are trying to figure that out, while we study it and get an accounting of what is happening, let us not expand it. let's just start there. tell us what you are doing, and do not do anything more while we are trying to figure out what you are doing. once we get that information, then we can move from there. that is where i hope we can start, and frankly what we have been working on for a year, with staff in both the majority and minority.
i see a number of you nodding your heads. i hope that is something you all would be happy and supportive of us doing as a committee and as a congress: figuring out what is going on. with that, i yield to my colleague. >> csi was my favorite show when i practiced criminal defense. if we passed a law tomorrow that shut off everybody's facial recognition on their iphones, i think we would have a whole different kind of perspective on this from our citizens. identifying people quickly and easily has so many positive law enforcement and safety applications that i think it would be irresponsible to disregard this technology completely. my intent is not to demonize law enforcement, nor to demonize the tools. i think we should also recognize that there are very responsible large corporations that want to get this right, and they do not want to get it right just for the bottom line, although that is helpful. they have corporate cultures as well, and more importantly, they are arguing for a federal regulatory framework. part of doing that is recognizing that the technology is here, that in some way, shape or form it is going to continue to be here, and that there is a tremendous number of positive applications. there are dangers, and they are significant dangers. for every reason there is a positive application for identifying people quickly, that same use is an invasion of the privacy of everyone who was in that particular space. we are going to work with it, we are going to continue to use it, and it is producing tremendous consumer convenience across a lot of different applications, but we have to be cognizant of the fact that this is a little different from a lot of other technologies, because identity is something that can never go away once it has been identified.
the right to free association and the right to do those things are fundamental to the american people, and anything that has a chilling effect on them has to be watched very closely. i agree with mr. jordan that we need to know how this is being used. i also agree with mr. connolly: technology will advance, human review will still exist, things will happen, and this will get better and better all the time. i do not want any false positives, and i do not want any false positives based on race, age or gender. but my number one concern is not only the false positives, it is the actual positives: what they are doing, how they are doing it, and why they are doing it. we have to understand that while this technology has a tremendous benefit to a lot of people, it poses real, significant and unique dangers to fundamental bases of privacy, first amendment rights, fourth amendment rights, and we have to continue to work forward. i should also say that this is not the first time government has been behind on these issues. we are so far behind on privacy, we are so far behind on data collection, data sharing and those types of issues. one of the dangers we run into is that by the time we get around to dealing with some of these issues, society has come to accept them. how the next generation views privacy in a public setting is different from how my generation and the generations above us view it. the world is evolving with technology, and this is going to be a part of it going forward, so i appreciate the people on both sides of the issue, and i appreciate the fact that we had this hearing today. with that, i yield back. >> i thank all of the panelists and all of my colleagues today for participating in this very important hearing. we have another member on his way, one who has himself been misidentified by this technology. he is a member of the committee but has had another committee obligation. he is rushing back to share his experiences with us, and i want to allow him to present the information he has on this issue personally.
i do want to say that one of the things that came out of this hearing is that this technology is not ready for primetime. it can be used in a positive way, but as many witnesses pointed out, ms. whittaker even cited a case where a person who was allegedly innocent was sent to jail based on a false identification, which certainly ought to be investigated. it can be used in positive ways, but it can also severely impact the civil rights and liberties of individuals. at this point i would note that the american civil liberties union showed that our colleague was misidentified, and i recognize him now. >> thank you, madam chair. i did have a constituent at a town hall say that in my case it is a step up from being a member of congress to being a criminal. i was quite offended on behalf of all of us. i really want to thank the chair and the ranking member for having this hearing; it is important. being from the bay area, i have had a relationship with a lot of these tech companies, and that relationship has been strained recently. the benefit that this technology could give us is undermined by over-marketing and a lack of social responsibility, as almost all of you have said. in the past i had a privacy bill in the legislature that was killed. a district attorney had told me about a serial rapist who was getting his victims' information from third-party data that he was paying for; the bill provided an opt-out, and it was killed fairly dramatically in the first committee in the assembly after i was able to get it out of the senate, trying to get mr. gomez to help me in those days. in that context, if i had a dime for every time those companies told me, when i had a reasonable question, that i was inhibiting innovation, i would be a wealthy person. i appreciate the work you do, but in the context of facial recognition, what is a meaningful sort of protection? i have said this to the committee before: americans have a right to be left alone.
how are you left alone in this kind of surveillance economy? facial recognition is important to get right, in my personal experience, but so is the overlay of all the other data accumulation. first of all, what is the danger of allowing companies to absorb a 5 billion dollar penalty when they quite consciously -- and i refer to some of my former friends in the tech companies in the bay area as being led by a culture of self-righteous sociopaths -- think it is all right to take advantage of people, and they get reinforced by the money they make without thinking about the social consequences? given that they are willing to absorb a billion-dollar hit, like ignoring the settlement that they agreed to, in this kind of culture what is the danger of allowing companies like facebook to have access to not just faceprint templates but their interaction with all the other data that they are collecting? >> thank you very much for the question. i think that demonstrates greatly the comment that was made earlier about the relationship between private, for-profit uses and public uses of this technology and how they can feed off each other, in ways that are beneficial or not so beneficial. your earlier comment went to the nature of our surveillance technology and the underlying question: what is it that we want to accept and live with in our country, based on our values, and then have this technology conform to? in that vein, i did not have to show my identification in this building today, but in most buildings in washington i would have to show my id. because this is a government building, i was checked by a scanner for a physical check, but i would hope that would not change -- that just because i can be identified off a video feed does not mean i lose the right to come into this seat of government without that. and i think that demonstrates that we need to focus on what the things are that we are protecting, which we discussed so clearly here today,
in terms of our values and freedoms and liberties, and how we don't let the technology -- because it is here, because it can do certain things, or because it is even convenient that it does certain things -- impinge on those in ways that we have not decided we are ready to accept as compromises. >> so how can americans be left alone? what does consent look like? >> well, in a commercial setting or a commercial context, a company should not be using facial recognition technology unless a person has said they want to use it for the benefit or convenience it provides. so if i want to use it as a member of a limited-membership establishment, or to get vip privileges at a hotel, or to expedite my check-in at a conference, i can choose to do that, but i would know that i was doing it; i would have to enroll in that system consciously. it is not something that could happen to me without my awareness. >> ok, who owns the data? when you look at this, we have had issues brought up with car companies that say they own the diagnostics; you have gps; all these private-sector actors say they own it. should we own it? >> ownership of data is a very complicated topic, and we look at it carefully, because data is not something that should necessarily be able to be sold, which is really the nature of property. but in regard to the rights one has to use it, those should be clearly spelled out and agreed to. if i agree to a certain level of service by enrolling in a facial recognition system, i should not have my data used for purposes i am not aware of. >> thank you very much. >> thank you so much. i am so glad you could get back. just in closing, very briefly, i think this hearing showed that this is a technology in wide-scale use, and we don't truly know how it is being used yet. there is very little transparency about how or why it is being used, or about what security measures are put in place to protect the american people and their own privacy concerns.
we also have the dual challenge not only of encouraging and promoting innovation, but also of protecting the privacy and safety of the american consumer. i was very much interested in the passion on both sides of the aisle to work on this and bring some accountability and reason to it. i believe that legislation should be bipartisan; i firmly believe that the best legislation is always bipartisan. and i hope to work in a very committed way with my colleagues on this side of the aisle and the other side of the aisle in coming up with common sense facial recognition legislation. i would now like to recognize mr. gomez, who was also misidentified and has personal experience with this. thank you, and thank you very much to all of our panelists. >> thank you madam chair. first i want to thank all the panelists for being here. for all the questions that we asked, we have twice as many more that we have not had a chance to ask. i want people to walk away understanding that this is a technology that is not going away; it is just going to get further and further integrated into our lives, through the private sector and through government, and we have to figure out what that means. at the same time, i don't want people to think that false positives are not a big deal, because for the people who are falsely identified as a particular person, it can change their lives; it is a big deal to them. so when people downplay it as, well, it's just that one person that goes to jail, that one person who gets pulled over, the one person that maybe doesn't make it to work on time -- they lose their job, and it has a ripple effect and devastation on their lives. it matters to them, and it should matter to all of us. so it's not one or the other. i do believe this will get better and better, but we have to put parameters on the use of this technology, and there are still a lot of questions that we have to deal with.
but ms. whittaker described it correctly, because when i started looking into this issue, i did run into that brick wall of national security claims, plus the corporate sector saying that this information is proprietary when it comes to their technology -- we're not going to tell you what it does, how accurate it is, who is selling it, or who is using it. that wall must come down. and that is where i think we share a view across the political spectrum: how do we make sure that that wall comes down in a responsible way, one that keeps innovation going and keeps people safe, but respects their liberties and their freedoms? so with that i yield back, and thank you, madam chair, for this hearing. >> i thank the witnesses. without objection, all members will have five legislative days in which to submit additional questions for the witnesses to the chair, which will be forwarded to the witnesses for their response. i ask the witnesses to please respond as promptly as you can. this hearing is adjourned. thank you. >> [captioning performed by the national captioning institute, which is responsible for its caption content and accuracy. visit ncicap.org]
