Transcripts For CSPAN3 Federal 20240702

The committee will come back to order, and the chair recognizes the representative for five minutes.

Thank you. I would like to thank our witnesses for being here today to continue to drive the conversation forward. The opportunity the federal government has to implement AI is exciting for the future of the country and the modern workforce. I would like the subcommittee, and all of us, to consider the impact of AI and other emerging technologies on our younger generation. While AI has numerous benefits that I am sure will be discussed here today, it has implications for our youth, especially when it comes to generative images of child exploitation. I would be more than happy to work with the chairwoman and our Oversight Committee to address these concerns. But before we do that, I want to speak about some of the AI frameworks that have been developed. Specifically, the National Institute of Standards and Technology has a well-established track record of developing frameworks and recommendations to improve cybersecurity outcomes in the federal government. Earlier this year, NIST published the AI Risk Management Framework, which was developed at Congress's direction through an open stakeholder process. Leading companies are using the NIST framework for managing AI risks, just as they use the NIST Cybersecurity Framework and other NIST cyber recommendations. With that being said, I would like to ask you whether or not you see the NIST AI framework being taken up by the federal government in the same way that NIST's cybersecurity work is being used today, and what steps, if any, your office is taking to implement the framework.

Thank you so much. I had the great pleasure of leading NIST many decades ago, when my hair was black, and I share your point about the important role the organization has played. In AI, the Risk Management Framework was one important step in a longer journey to get to where we need to be, which is actually having safe and effective AI, whether for private or public sector use.
And you see with industry the adoption of risk management frameworks, and I see that approach being taken within government as well. What the framework allows people to do is ask how to make an AI system safe and effective. The questions will be difficult, and the processes organizations go through will be different, but if your organization is using the framework, you are actually asking the right questions. I want to step back and also be clear about what we all understand we need for the future: AI systems that are safe and effective, that don't do dangerous things or inappropriate things that we don't want them to do. But I think we should all be very clear that among companies and researchers, nobody really knows how to do that yet. So for the technology community, that work is still ahead: to continue to develop tools so that we can get as good at understanding whether an AI system is safe and effective as we are for physical products and many other areas. That is some of the work that still remains.

I want to follow up and ask about criticism toward the blueprint that OSTP has produced. The blueprint has been criticized for being in conflict with the framework. Can you address this?

I would be happy to. The AI Bill of Rights focused on our values, which was so important when we are in very choppy times, choppy waters, moving so fast. If you go back and look at the Bill of Rights, it talks about how important it is to make sure that people have access to safe systems that are secure. A lot of the same themes that you will find in the Risk Management Framework run through everything we talk about here today; this is very consistent with the Bill of Rights. That work was developed by OSTP working very closely with the government, with many inputs from private organizations, companies, civil society organizations, and academics. And when NIST built the Risk Management Framework, there was a lot of coordination.
The Bill of Rights, in its second part, took the initial steps on how an organization starts grappling with what processes to put in place to manage these risks.

I am out of time. We will be following up with questions in writing. The gentleman from New York yields, but I would like to yield my five minutes back to Mr. Langworthy.

Thank you very much. I also wanted to bring up an executive order issued by the last administration requiring federal agencies to post for public view most of their AI use cases. This is intended to give the public a view into the current and planned use of AI systems. But many of these agency inventories are missing or incomplete, according to a Stanford University AI institute white paper, which was issued last December. Do you agree that the public has a right to know for what purposes AI is used by federal agencies, and that it is important for these inventories to be done consistently, completely, and accurately, and will you pledge to work to continue to ensure that this is the case?

Thank you very much, Mr. Langworthy, for the question. I share your sentiment on the value of the use case inventories, for all of the reasons that you mentioned. It's important for the public to know, and for the government to understand, how AI is being used. And there is important progress that we are making, and will continue to make, as a federal government on those AI use cases.

Thank you. Transparency, I think, is something we need to fight for, especially as this emerging technology is coming at us so quickly. I want to see if regulatory sandboxes have been part of your conversations. The AI Act approved by the European Parliament includes a provision about setting up coordinated sandboxes to foster innovation in artificial intelligence across the EU. Do you see regulatory sandboxes having success there, and whether they could be successful in the United States?

My colleagues may have answers on that, Mr. Langworthy.
I do not think that I have enough information to give you a complete answer. I will note that we continue to work with our colleagues and allies around the world, simply because AI is happening everywhere, and different regions are taking somewhat different approaches. We are finding that with our like-minded allies, we share the focus on getting to a safe and effective AI future, and I think there will be productive collaborations. I don't know if anyone has other comments?

Working effectively on AI with our allies is very important. We are focusing a lot not only on data sharing but on how we do that effectively according to regulations, and also on how we build models together and evaluate the effectiveness of those models together.

Nothing to add on regulatory approaches.

Thank you. I want to focus on the Department of Homeland Security. Are you concerned that as AI systems become more mature and complicated, criminals will have greater opportunity to commit heinous crimes?

We absolutely are concerned there, Congressman. We are also looking to AI to combat these crimes. I shared our work with our operations, which used AI to help rescue victims from active abuse as well as to arrest suspected perpetrators. So as we are looking to better defend against the use of AI to commit these crimes, we are also using it to defend against them. It has to be both at the same time, so that all of the fruits of what AI can bring us are there. Our most vulnerable are, I believe, those most likely to be harmed by a lot of this technology.

Now, to expand the scope of the question to include America's adversaries unleashing increasingly powerful cyber attacks against U.S. critical systems: what is DHS doing?

Absolutely. We are, and have been for the entire administration, concerned about adversarial use of AI against federal networks. The Secretary created the task force, which I co-lead, and charged us with looking at the use of AI to secure U.S. critical infrastructure.
We are looking to work with the Cybersecurity and Infrastructure Security Agency on how we can partner on safeguarding U.S. uses of AI and strengthening cybersecurity processes.

Very good. I yield back to the chairman.

The honorable representative from California, for five minutes.

Thank you, Mr. Chairman. I thought your description of AI as statistics is one of the best I've heard. Was that your phrase? These things get thrown around; I don't want to claim anything.

It's one I've been using for a while.

I appreciate it. I do not always agree with him, but I thought the op-ed in the New York Times where he talked about human intelligence, what that entails, and how that is so different from a predictive model that takes a lot of data and produces probabilistic outcomes that can be false. One of the concerns that I have is that there has been an overhyping of AI as a form of human intelligence, which I think gives our species less credit than we deserve. So I appreciate your clarification. We have a bill called the SEARCH Act which would basically require government agencies to use AI technologies to help improve the search functions on their own websites and in collecting data. Could you help describe what the benefits of having AI do that, helping people search government agencies, could be?

Thank you for your leadership on that matter, as well as on other issues related to AI. And I think you described it very clearly. If you step back and think about how much of what the government does is about interacting with citizens, providing information and taking information, those are areas where this new generation of language-based AI can of course have tremendous benefits, but it has to be used thoughtfully and carefully. As you can imagine, people are starting to do this: using generative AI to summarize complex documents, to synthesize arguments from across many different perspectives, to draft responses.
And I emphasize drafts, because anyone who has worked with these technologies knows their limits. I think what we are seeing, in the private and public sector alike, is that there are few cases where we rely on a chatbot to solve a problem outright. There are many cases where that interaction might be the beginning of accelerating a workflow, or of improving whatever it is you do. I think those are interesting examples and distinctions, building on top of the ways government agencies are using AI on sensor data, or data that they collect that is not language-based. This is the next chapter that is starting to unfold, and I appreciate the focus on it.

When you look at AI (and these things are hard to predict), how do you think over the next 10 years it will have an effect on jobs? Is it a case of augmenting people's talent? As I said to Hollywood, my concern is not that if they had AI bots write all of the scripts, they will be able to produce Hamlet. My concern is it will be terrible. It will be the further devolution of entertainment. Many of the times I have used ChatGPT, and I told my staff to use it for a speech, it is not as good as Cliff Notes. And if professors are having students use it and they are getting good grades, it is because they are not asking the right questions; classes are not challenging enough. My point is: where is it that AI is going to displace things? How do we prepare for it? Where is it going to create opportunity?

This focus on the impact of AI technology on jobs is critically important, because we have a long history here. We know technology does change work in all kinds of ways, and I think it's very early. Right now, we do not fully know how the new generation of language-based AI will blossom and what impacts it will have.
The best understanding that many experts have in this area is that there are things that will look like prior waves of technology coming in, and there are things that are not going to look the same. What I think we can expect is that some jobs will get up-skilled and become more valuable, allowing people to earn more for their labor, and other jobs will get displaced. That has happened with every wave of technology, not just for decades but probably for millennia. And what is very different about this new generation is the fact that it can be used to do creative tasks, anything from graphic design and image generation to writing documents to even legal analysis. So a lot of the professions people imagined were not going to be touched by AI technology will now come into the limelight.

I'm still waiting for ChatGPT to come up with something new. We will see.

I don't think that is possible with statistics, sorry.

So, with that, the chair recognizes the honorable representative from South Carolina for five minutes.

Thank you. Dr. Martell, I want to start off with you. You have defined AI in a way that I have never heard before, and in a way that is not really how other sources would define it. Can you please elaborate on the definition?

Actually, it does not define all of AI. There is a lot of prior-generation AI that is rule-based: expert systems written as a bunch of if-then statements. I would not call that statistics. But modern AI is all statistics. It is based on gathering massive amounts of data from the past (that is its lens into the world), particularly highly curated data which represents the task at hand. You can think back to any statistics class you had where you did linear regression: it builds the model and uses the model to predict the future. And I do not think the scientific community would disagree with that as a characterization.

I see your point. I hadn't thought of it that way.
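Dr. Martell's framing of modern AI as statistics, fitting a model to past data and then using it to predict the future, is exactly what the linear regression he mentions does. A minimal, self-contained Python sketch (purely illustrative, not anything presented at the hearing):

```python
# Fit y = a*x + b by ordinary least squares on "data from the past",
# then use the fitted model to predict an unseen point.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    # slope = covariance(x, y) / variance(x)
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    b = my - a * mx
    return a, b

xs = [1, 2, 3, 4]       # historical inputs
ys = [2, 4, 6, 8]       # historical outcomes (exactly y = 2x here)
a, b = fit_line(xs, ys)
prediction = a * 5 + b  # "predict the future" from the fitted model -> 10.0
```

The same structure, learn parameters from curated historical data and extrapolate, underlies far larger models; the scale changes, not the statistical character.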
Can we talk about possible uses of AI within either DoD or an adversary's military, and the capabilities involved?

One of the reasons that I describe it like that is to have people realize that AI is not monolithic. When we say AI, what we mean is a specific technology or set of technologies. It's important to know that, because we could be doing well in one case and very poorly in another, and that may be so for our adversaries as well. If we focus on AI as a monolithic thing, we are actually missing where we should be aiming our attention: the capabilities that we want to deliver or defend against. So we spend a lot of energy characterizing those. I would be happy to discuss that in a different venue. But there are lots of cases within the business aspects of the Department of Defense where doing analyses of documents, or understanding the environment using computer vision, is extremely helpful. In those cases, when you think about a document being understood, or an image being analyzed and some action being taken from that analyzed image, it is important. Let's say we were looking for something in that image, a truck or a school bus, but the system got it wrong. It is important to us to build systems that are not simply dependent on that algorithm but have humans wrapped around it, human-machine teams, so a human can say we got it wrong. Remember, it will always be the case that every model sometimes gets it wrong, so you need a structure to correct the system, feed back into the system, and make the system better.

When it comes to weaponizing AI, one of the benefits is the speed at which you can act. If you have an AI-enabled drone swarm, how do you incorporate the human component without losing the benefit of using AI in that scenario, the speed at which it is able to act?

It is an excellent question, thank you for asking. One thing the military does well is train with technology.
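The structure Dr. Martell describes, a human wrapped around a fallible model so errors are caught and fed back to improve the system, can be sketched roughly as follows (all names here are hypothetical; a minimal illustration, not any actual DoD system):

```python
# Hypothetical sketch of human-machine teaming: route low-confidence model
# outputs to a human reviewer, and collect corrections as retraining feedback.
def review_loop(model, inputs, human_label, threshold=0.8):
    results, corrections = [], []
    for x in inputs:
        label, confidence = model(x)
        if confidence < threshold:          # model is unsure: a human decides
            label = human_label(x)
            corrections.append((x, label))  # feedback to make the system better
        results.append((x, label))
    return results, corrections

# Toy stand-in for a real classifier: confident on even inputs, shaky on odd.
def toy_model(x):
    return ("even", 0.95) if x % 2 == 0 else ("even", 0.40)

results, corrections = review_loop(toy_model, [1, 2, 3, 4], lambda x: "odd")
# corrections -> [(1, 'odd'), (3, 'odd')]: the model's bad guesses were caught
```

The key design point matches the testimony: the algorithm's output is never the final action on its own; the confidence gate decides when a person is in the loop, and the corrections become the mechanism for improving the model over time.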
You can think about the way our training works, over and over and over again, as a way for you to develop justified confidence in the tool. If you have justified confidence in a tool, sometimes it will jam, but you still get a sense of the likelihood, or the conditions under which it might, and you learn how to use it.

I see where you are going. So the training component, making sure in planning that you answer the question 4,000 times before it is done with live fire, is the solution.

That's right. Sometimes it will get it wrong, and whoever made the decision to deploy the system will be responsible. There is always a responsible agent making a decision to deploy the system.

What is concerning is that while a military will likely make sure there is a training component, a non-state actor that does not care about collateral damage or the consequences of their actions may be able to use the same technology without regard to the necessary collateral damage.

One hundred percent correct. That is a particular use case that we see, and that we should train against: what are the tools and countermeasures we need for that situation? That is why it's important to not think about it as monolithic.

Thank you for being here. I yield back, Mr. Chairman.

The chair recognizes the honorable representative Higgins from Louisiana for five minutes.

Thank you, Mr. Chairman, for waiving me on to address this topic. Thank you for being here, ladies and gentlemen. We appreciate you.

Madam, in your opening statement, your written statement, you say that AI advances also bring the risk of a deepening erosion of privacy as surveillance increases and more and more sensitive information is used to train AI systems. You point out that authoritarian governments already use AI to censor and repress expression and abuse human rights. Is that part of your statement?

Yes, sir, it is.

Okay. Just clarifying.
I have a broader concern that I would like to focus on in my limited time. It is regarding government's use of AI in enforcement of laws and regulations. I am strongly against that. And I am going to ask you regarding law enforcement. My background, you may not know: I appreciate the work that has been done on the ground at the enforcement level, and I have my concerns there. But you referenced authoritarian governments' use of AI. Talk to us about criminal enterprises or state-sponsored cyber threat enterprises and how they would relate to artificial intelligence; for instance, malware AI, trojan horse AI. We have all seen major compromises of cyber systems at the government level and private sector