
Buckeye broadband supports c-span as a public service along with these other television providers, giving you a front row seat to democracy. Microsoft president brad smith testified on ways to regulate artificial intelligence, joining other witnesses to discuss transparency laws and the idea of labeling products like images and videos as being made by ai. This hearing before the senate judiciary subcommittee on privacy, technology and the law is about 2 hours and 20 minutes. The hearing of our subcommittee on privacy, technology, and the law will come to order. I want to welcome our witnesses and the audience who are here, and say a particular thanks to senator schumer, who has been very supportive and interested in what we are doing here, and also to chairman durbin, whose support has been invaluable in encouraging us to go forward. I am grateful to my partner in this effort, senator hawley; together we produced a framework, basically a blueprint for a path forward to achieve legislation. Our interest is in legislation. This hearing, along with the two previous ones, has to be seen as a means to that end. We are very result oriented. I know you are from your testimony. I have been enormously encouraged and emboldened by the response so far. Just in the past few days, in my conversations with leaders in the industry like mr. smith, there is a deep appetite, a hunger, for rules and guardrails, basic safeguards for businesses and consumers, for people in general, from the panoply of potential peril. There is also the desire to make use of the potential benefits. Our effort is to provide for regulation in the best sense of the word.
Regulation that permits and encourages innovation, new business and technology and entrepreneurship, and that provides those guardrails, enforceable safeguards that encourage confidence in a growing technology. The technology is not entirely new; it has been around for decades, but artificial intelligence is regarded as entering a new era, and make no mistake, there will be regulation. The only question is how soon and what kind. It should be regulation that encourages the best in american free enterprise and provides the protections we do in other areas of our economic activity. To my colleagues who say there is no need for new rules, that we have enough laws protecting the public: we have laws that prohibit unfair and deceptive competition, laws that regulate airline safety and drug safety, but nobody would argue that simply because we have those rules we don't need specific protections for medical devices or car safety. Because we have rules that prohibit discrimination in the workplace doesn't mean we don't need rules that prohibit discrimination in voting. We need to make sure these protections are framed and targeted in a way that applies to the risks involved. Risk-based rules, managing the risks, is what we need to do here. Our principles are pretty straightforward. We have no pride of authorship. We circulated this framework to encourage comment. We won't be offended by criticism from any quarter. That is the way to make this framework better and eventually achieve legislation by the end of this year. The framework includes establishing a licensing regime for companies that are engaged in high-risk ai development.
Creating an independent oversight body that has expertise with ai and works with other agencies to administer and enforce the law; protecting national and economic security to make sure we aren't enabling china or russia and other adversaries to interfere in our democracy or violate human rights; requiring transparency about the use of ai models, which at this point would include rules like watermarking, digital disclosure when ai is being used, and data access for researchers; and ensuring that ai companies can be held liable when their products violate civil rights or endanger the public. Deep fake impersonations, hallucinations: we have all heard those terms, and we need to prevent those harms. Senator hawley and i, as former attorneys general of our states, have a deep and abiding affection for the enforcement powers of state officials, so there is effective enforcement. Private rights of action and federal enforcement are very important. Let me just close by saying, before i turn it over to my colleagues, that we will have more hearings. The way to build a coalition in support of these measures is to disseminate them as widely as possible, for colleagues to understand what is at stake. We need to listen to the industry leaders and experts we have before us today, and to act with dispatch. If we let this horse get out of the barn, it will be even more difficult to contain than social media. We are seeking to act on social media right now as we speak. I asked sam altman what his greatest fear was, and i said mine: my nightmare is the massive unemployment that could be created. That is not something we deal with directly here, but it shows how wide the ramifications may be, and we need to deal with worker displacement and training. This new era is one that portends enormous promise but also peril. I will turn now to ranking member senator hawley. Thank you for organizing this hearing. This is the third of these hearings we've done. I have learned a lot in the previous couple.
Some of what we are learning about the potential of ai is exhilarating. Some of it is horrifying, and i think what i hear the chairman saying, and what i agree with, is that we have a responsibility here now to do our part to make sure this new technology, which holds a lot of promise but also peril, actually works for the american people. That it is good for working people and families, and that we don't make the same mistakes congress made with social media. Thirty years ago congress outsourced social media to the biggest corporations in the world, and that has been nearly an unmitigated disaster, where we had the biggest, most powerful corporations not just in america but in the history of the globe doing whatever they want with social media, running experiments basically every day on america's kids, inflicting mental health harms the likes of which we've never seen, and messing around in our elections in a way that is deeply, deeply corrosive to our way of life. We cannot make those mistakes again. We are here, as senator blumenthal said, to make sure this technology is something that benefits the people of this country. Have no doubt, with all due respect to the heads of the corporations in front of us, no doubt it will benefit your companies. What i want to make sure of is that it benefits the american people. I look forward to this. Thank you, mr. chairman. I want to introduce our witnesses, and as is our custom i will swear them in and ask them to submit their testimony. Welcome to all of you. Nvidia chief scientist william dally joined the company in january 2009 as chief scientist, after spending 12 years at stanford university where he was chairman of the computer science department. He has published over 250 papers, holds 120 issued patents, and is the author of four textbooks. Brad smith is vice chair and president of microsoft.
As microsoft's vice chair and president he is responsible for spearheading the company's work on a wide variety of critical issues involving the intersection of technology and society, including artificial intelligence, cybersecurity, privacy, environmental sustainability, human rights, digital safety, immigration, philanthropy, and products and business for nonprofit customers. We appreciate your being here. Professor woodrow hartzog is the class of 1960 scholar at boston university school of law, and also a nonresident fellow at the cordell institute for policy in medicine and law at washington university, a faculty associate at the berkman klein center for internet and society at harvard university, and a scholar at the center for internet and society at stanford law school. I could go on about each of you at much greater length with all of your credentials, but suffice it to say, very impressive. If you now stand i will administer the oath. [witnesses were sworn in] Thank you. Why don't we begin with you, mr. dally? Chairman blumenthal, ranking member hawley, thank you for the privilege to testify today. I am nvidia's chief scientist and i am delighted to discuss artificial intelligence, its journey and its future. Nvidia is at the forefront of accelerated computing and generative ai, technologies with the potential to transform industries, address global challenges, and profoundly benefit society. Since our founding in 1993 we have been committed to developing technology to empower people and improve the quality of life worldwide. Today over 40,000 companies use nvidia platforms across media and entertainment, scientific computing, healthcare, financial services, internet services, automotive and manufacturing to solve the world's most difficult challenges and bring new products and services to consumers worldwide. At our founding in 1993 we were a 3d graphics startup, one of dozens of startups competing to create an entirely new market for accelerators to enhance computer graphics for games.
In 1999 we invented the graphics processing unit, or gpu, which could perform a massive number of calculations in parallel. We launched the gpu for gaming and then recognized that the gpu could accelerate any application that could benefit from massive parallel processing. Today researchers worldwide innovate on nvidia gpus. This collective effort has produced major advances in ai that will revolutionize industries and provide tremendous benefits to society across sectors such as healthcare, medical research, education, business, cybersecurity, climate and beyond. However, we also recognize that, like any new product or service, ai products and services have risks. Those who make, use or sell ai-enabled products and services are responsible for their conduct. Fortunately, many uses of ai applications are subject to existing laws and regulations that govern the sectors in which they operate. Ai-enabled services in high-risk sectors could be subject to enhanced licensing and certification requirements when necessary, while other applications with less risk of harm may need less stringent licensing or regulation. With clear, stable and thoughtful regulation, ai developers will work to benefit society while making products and services as safe as possible. For our part, nvidia is committed to the safe and trustworthy development and deployment of ai. For example, nemo guardrails, our open source software, empowers developers to guide generative ai applications to produce accurate, appropriate and secure text responses. Nvidia has implemented model risk management guidance, ensuring a comprehensive assessment and management of the risks associated with nvidia-developed models. Today nvidia announces it is endorsing the white house's voluntary commitments on ai. As ai advances more broadly we can and will continue to identify and address risks.
No discussion of ai would be complete without addressing what is often described as frontier ai models. Some have expressed fear that frontier models will evolve into uncontrollable artificial general intelligence which could escape our control and cause harm. Fortunately, uncontrollable artificial general intelligence is science fiction, not reality. At its core, ai is a software program that is limited by its training, the inputs provided to it, and the nature of its outputs. In other words, humans will always decide how much decision-making power to cede to ai models. So long as we are thoughtful and measured, we can ensure the safe, trustworthy and ethical deployment of ai systems without suppressing innovation. We can spur innovation by ensuring ai tools are widely available to everyone, not concentrated in the hands of a few powerful firms. I will close with two observations. First, the ai genie is already out of the bottle. Ai algorithms are widely published and available to all. Ai software can be transmitted anywhere in the world at the press of a button. Many ai development tools and frameworks, as well as ai models, are open source. Second, no nation, and certainly no company, controls a chokepoint to ai development. U.s. platforms are competing with companies from around the world. While u.s. platforms may be the most energy efficient, cost efficient and easiest to use, they are not the only viable alternatives for developers abroad. Other nations are building ai systems with or without u.s. components, and they will offer those applications in the worldwide market. Safe and trustworthy ai will require multistakeholder cooperation or it will not be effective. The united states is in a remarkable position today, and with your help we can continue to lead on policy and innovation well into the future. Nvidia stands ready to work with you to ensure that the development and deployment of generative ai and accelerated computing serve the best interests of all.
Thank you for the opportunity to testify before this committee. Thank you very much. Mr. smith. Chairman blumenthal, ranking member hawley, members of the subcommittee, my name is brad smith. I am the vice chair and president of microsoft, and thank you for the opportunity to be here today, and i think more importantly thank you for the work that you've done to create the framework you have shared. Chairman blumenthal, i think you put it very well at the outset: first, we need to learn and act with dispatch. And ranking member hawley, i think you offered real words of wisdom. Let's learn from the experience the whole world had with social media, and let's be clear eyed about the promise and the peril in equal measure as we look to the future of ai. I would first say i think your framework does that. It doesn't attempt to answer every question, by design, but it is a very strong and positive step in the right direction, and it puts the u.s. government on the path to being a global leader in ensuring a balanced approach that will enable innovation to go forward with the right legal guardrails in place. As we all think about this more, i think it's worth keeping three goals in mind. First, let's prioritize safety and security, which your framework does. Let's require licenses for advanced ai models and uses in high-risk scenarios. Let's have an agency that is independent and can exercise real and effective oversight over this category. And then let's couple that with the right kinds of controls that will ensure safety of the sort we've already seen start to emerge in the white house commitments that were launched on july 21st. Second, let's prioritize, as you do, the protection of our citizens and consumers. Let's prioritize national security, always in some ways the first priority of the federal government. But let's think as well, as you have, about protecting the privacy, the civil rights and the needs of kids, among many other ways of working to ensure we get this right.
Let's take the approach that you are recommending, namely a focus not only on those companies that develop ai, like microsoft, but also on companies that deploy ai, like microsoft. In different categories we are going to need different levels of obligations. And as we go forward, let's think about the connection between, say, the role of a central agency that will be on point for certain things and the obligations that frankly will be part of the work of many agencies, and indeed our courts as well. And let's do one other thing as well. Maybe it is one of the most important things we need to do to ensure that the threats that many people worry about remain part of science fiction and don't become a new reality. Let's keep ai under the control of people. It needs to be safe. And to do that, as we have encouraged, there need to be safety brakes, especially for any ai application or system that can control critical infrastructure. If a company wants to use ai to, say, control the electrical grid or all of the self-driving cars on our roads or the water supply, we need to learn from so many other technologies that do great things but also can go wrong. We need a safety brake just like we have a circuit breaker in every building and home in this country to stop the flow of electricity if that is needed. Then i would say let's keep one third goal in mind as well. This is the one where i would just urge you to maybe consider doing a bit more to add to the framework. Let's remember the promise that this offers. Right now if you go to state capitals, if you go to other countries, i think there is a lot of energy being put into that. When i see what governor newsom is doing in california or governor burgum in north dakota, i see them at the forefront of figuring out how to use ai to, say, improve the delivery of healthcare, advance medicine, improve education for our kids, and maybe most importantly make government services more efficient, and use the savings to provide more and better services to our people.
That would be a good problem to have the opportunity to consider. In sum, professor hartzog has said this is not a time for half measures. He is right. Let's go forward as you have recommended. Let's be ambitious and get this right. Thank you. Thank you very much. Mr. hartzog, i read your testimony and you are very much against half measures. We look forward to hearing what the full measures you recommend are. That is correct, senator. Chair blumenthal and members of the committee, thank you for inviting me to appear before you today. I am a professor of law at boston university. My comments today are based on a decade of researching law and technology issues. I am drawing from research on artificial intelligence policy that i conducted as a fellow with colleagues at the cordell institute at washington university in st. louis. Committee members, up to this point ai policy has largely been made up of industry-led approaches like encouraging transparency, mitigating bias and promoting principles of ethics. I would like to make one simple point in my testimony today. These approaches are vital, but they are only half measures. They will not fully protect us. To bring ai within the rule of law, lawmakers must go beyond these half measures to ensure that ai systems, and the actors that deploy them, are worthy of our trust. Half measures like audits, assessments and certifications are necessary for data governance. But industry leverages procedural checks like these to dilute our laws into managerial box-checking exercises that entrench harmful surveillance-based business models. A checklist is no match for the staggering fortune available to those who exploit our data and labor to develop and deploy ai systems. It is no substitute for meaningful liability when ai systems harm the public. Today i would like to focus on three popular half measures and why lawmakers must do more. These problems are not so new. Ai systems consolidate power.
This power is used to benefit some and harm others. Lawmakers should borrow from established legal approaches to remedy power imbalances: require broad nonnegotiable duties of loyalty, care and confidentiality, and implement robust bright-line rules that limit harmful secondary uses and disclosures of personal data in ai systems. My final recommendation is to encourage lawmakers to resist the idea that ai is inevitable. When lawmakers go straight to putting up guardrails, they fail to ask questions about whether particular ai systems should exist at all. This dooms us to half measures. Strong rules would include prohibitions on unacceptable practices like emotion recognition, biometric surveillance in public spaces, predictive policing and social scoring. In conclusion, to avoid the mistakes of the past, lawmakers must make hard calls. Trust and accountability can only exist where the law provides meaningful protections for humans. With ai, half measures will certainly not be enough. Thank you, and i welcome your questions. Thank you, professor hartzog. I take very much to heart your admonition against half measures. I think listening to both senator hawley and myself you have a sense of our boldness and initiative, and we welcome all of the specific ideas, most especially, mr. smith, your suggestion that we can be more engaged at the state level or in the federal government in making use of ai in the public sector. But taking the thought that professor hartzog so importantly introduced, ai technology in general is not neutral. How do we safeguard against the downside of ai, whether it's discrimination or surveillance? Would this licensing regime and oversight entity be sufficient, and what kind of power do we need to give it? I would say first of all i think a licensing regime is indispensable in certain high-risk scenarios. It won't be sufficient to address every issue, but it is a critical start.
I think what it really ensures, especially for the frontier models that are most advanced as well as certain applications at highest risk, is that you do need a license from the government before you go forward. That is real accountability. You can't drive a car until you get a license. You can't make a model or the application available until you pass through that gate. I do think that it would be a mistake to think that one single agency or one single licensing regime would be the right recipe to address everything, especially when we think about the harms we need to address. That's why it's equally critical that every agency in the government that is responsible for the enforcement of the law and the protection of people's rights must have the capability to assess ai. I don't think we want to move the approval of every new drug from the fda to this agency, so by definition the fda is going to need, for example, to have the capability to assess ai. That would be just one of several additional specifics that i think one can think about. I think that's a really important point, because ai is going to be used in making automobiles, making airplanes, making toys for kids. So the faa, the fda, the federal trade commission, the consumer product safety commission, they all presently have rules and regulations, but there needs to be an oversight entity that uses some of those rules, adapts them, and adopts new rules so that those harms can be prevented. There are a lot of different names we can call that entity; connecticut now has an office of artificial intelligence. You could use different terms, but i think the idea is that we want to make sure the harms are prevented through a licensing regime focused on risk. Mr. dally, you said that autonomous ai is science fiction, that ai beyond human control is science fiction. But science fiction has a way of coming true.
I wonder whether that is a potential here. Certainly it is a fear that is widely shared at the moment, whether it is fact-based or not; it is the reality of human perception. And as you well know, trust and confidence are very, very important. So i wonder how we counter the perception and prevent the science fiction from becoming reality. So artificial general intelligence that is out of control is science fiction, not autonomy. We use artificial intelligence, for example in autonomous vehicles, all the time. I think the way we make sure that we have control over ai of all sorts is, for any really critical application, keeping a human in the loop. Ai is a computer program. It takes an input and produces an output, and if you don't connect something that can cause harm to the output, it can't cause that harm. And so anytime some grievous harm could happen, you want a human being between the output of that ai model and the causing of harm. I think as long as we are careful about how we deploy ai, keeping humans in the critical loops, we can ensure that the ais will not take over and shut down our power grid or cause airplanes to fall out of the sky. We can keep control over them. Thank you. I have a lot more questions, but we are going to adhere to five-minute rounds. We have a very busy day, as you know, with votes as a matter of fact, and i will turn to senator hawley. Thank you, mr. chairman. Thanks again to the witnesses for being here. I want to particularly thank you, mr. smith. I know there is a group of your colleagues, your counterparts in industry, who are gathering i think tomorrow, and that is what it is, but i appreciate you being willing to be here in public and answer questions while the press is here and this is open to anybody who wants to see it, and i think that's the way this ought to be done. I appreciate your willingness to do that. You mentioned protecting kids, and i want to start with that if i could.
I want to ask you about what microsoft has done and is doing. Kids use the bing chat bot, is it fair to say? Yes, with certain ages. We don't challenge every age, but yes, in general it is possible for children to register if they are a certain age. And the age is? I'm trying to remember, senator. I think it is 13. Does that sound right? I was going to say 12 or 13. Do you have some sort of age verification? How do we know what age? Obviously the kid can put in whatever age he or she wants to. Is there some sort of age verification? There are controls that typically involve getting permission from a parent. We use them across our services, including for gaming. I don't know off the top of my head exactly how it works, but i would be happy to get you the details. Great. My impression is bing chat doesn't really have a robust age verification. There's no way really to know, but you can correct me if that's wrong. Let me ask you this. What happens to all of the information that our hypothetical 13-year-old has put into the tool as he or she is having this chat? They could be chatting about anything, going back and forth on any number of subjects. What happens to the info the kid puts in? The most important thing i would say first is that it is all handled in a manner that protects the privacy of children. How is that? Well, we follow the rules in coppa, which exists to protect children's online privacy, and it forbids using that information for tracking. It forbids its use for advertising or for other things. It seeks to put very tight controls around the use and the retention of that information. The second thing i would add is that in addition to protecting privacy, we are hyper focused on ensuring that in most cases people of any age, but especially children, are not able to use something like bing chat in ways that would cause harm to themselves or to others. And how do you do that? We basically have a safety architecture we use across the board. Think about it like this. There are two things around a model.
The first is called a classifier. So that if somebody asks, how can i commit suicide tonight, or how can i blow up my school tomorrow, that hits a classifier that identifies a class of questions or problems or issues. Second, there is what we call a meta prompt, and we intervene so that the question is not answered. If someone asks how to commit suicide, we typically would provide a response that encourages them to get mental health assistance and counseling and tells them how. If somebody wants to know how to build a bomb, it's a no, you cannot use this to do that. And that fundamental safety architecture is going to evolve, it's going to get better, but in a sense it is at the heart, if you will, of both what we do and i think the best practices in the industry, and i think part of what we are talking about here is how we take that architectural element and continue to strengthen it. Very good, that's helpful. Let me ask about the information, back to the kids' information for a second. Is it stored in the united states, or is it stored overseas? If the child is in the united states, the data is stored in the united states. That's true not only for children, it's for adults as well. And who has access to that data? The child has access; the parents may or may not have access. In what circumstances would the parents have access? I would have to go deep into the specifics on that. Our general principle is this, and this is something we implement in the united states even though it's not legally required in the united states. It is legally required, as you may know, in europe. People, we think, have the right to find out what information we have about them. They have the right to see it. They have the right to ask us to correct it if it's wrong. They have the right to ask us to delete it if that's what they want us to do. And you do? If they ask you to delete it, you delete it?
We had better. Yes, that's a proactive promise and we do a lot to comply with that. I have a lot more questions; i'm trying to adhere to the time limit, mr. chairman. Five minutes, mr. chairman? We will have another round. Great news for us, not such great news for the witnesses, sorry. Before i leave the subject, just about the kids' personal data and where it's stored: i am asking you this because, as you know, we have seen other technology companies in the social media space have major issues with where data is stored, and major access issues, when the thinking was that it should be hard to get at that data. I am thinking in particular of china, where we have seen other social media companies who say america's data is stored in america, but guess what, lots of people in other countries can access that data. So is that true for you, mr. smith? A child's data that they have entered into bing chat, which is stored in the united states, you said, if they are an american citizen: can it be accessed in, let's say, china by a microsoft china-based engineer? I don't believe so. I would have to go back and confirm that, but i don't believe so. Would you get that for me for the record? I will. I will have more questions later. Thank you, mr. chairman. Senator klobuchar. Thank you very much. Thank you, all of you. I think i will start with elections, since i am chair of the rules committee. Mr. smith, in your written testimony you talk about how watermarks could be helpful for disclosure of ai-generated material. As you know, and we have talked about, i have a bill that i lead, that representative clarke leads in the house, to require a disclaimer and some kind of mark on ai-generated ads. I think we have to go further. We will get to that in a minute, professor hartzog. But can you talk about what you mean in your written testimony that the health of democracy and meaningful civic discourse will undoubtedly benefit from initiatives to help protect the public against deception or fraud facilitated by ai-generated content? Absolutely.
Here i do think things are moving quickly, in both a positive and a concerning direction, in terms of what we are seeing. On the positive side, i think you are seeing the industry come together, a company like adobe exercising real leadership, and there is a recipe that i see emerging. I think it starts with a first principle: people should have the right to know if they are getting a phone call from a computer, if there is content coming from an ai system rather than a human being. We then need to make that real with legal rights that back it up. We need to create what is called a provenance system, watermarking for legitimate content, so that it can't be altered easily without detection to create a deep fake. We need to create an effort that brings industry and, i think, governments together so we know what to do and there is a consensus when we do spot deepfakes, especially, say, deepfakes that have altered legitimate content. Thank you. Let's get to that, hot off the press. Senator hawley and i introduced our bill today, with senator collins, who led the electoral count reform act as you know, and senator coons, to ban the use of deceptive ai-generated content in elections. So this would work in concert with some watermark system, but this gets into the deception where it is fraudulent: ai-generated content pretending to be the elected official or the candidate when it is not. We have seen this used against people on both sides of the aisle, which is why it was so important that we be bipartisan in this work. And i want to thank him for his leadership on not only the framework but also the work that we are doing. I guess i will go to you, mr. hartzog. We do have an exception for satire and humor, because we love satire so much, the senators do, just kidding. Could you talk about why you believe there has to be some outright ban of misleading ai content related to federal candidates in political ads? Sure, absolutely.
Thank you for the question. Of course, keeping in mind the free expression constitutional protections that would apply to any sort of legislation, I do think bright-line rules and prohibitions on such deceptive ads are critical, because we know that procedural walkthroughs, as I said in my testimony, often give the appearance of protection without protecting us. So to outright prohibit these practices, I think, is really important. And I would even go potentially a step further and think about ways in which we could prohibit not just practices we consider deceptive, but practices we consider abusive — those that leverage our limitations, and our desire to believe or want to believe things, against us. There's a body of law that runs alongside unfair and deceptive trade practices, around abusive trade practices.

Okay, all right. Mr. Dally, thinking of that — and I want to talk to Mr. Smith about this as well — AI is used in scams. I know someone well who has a kid in the Marines who is deployed somewhere — they don't even know where it is — and a fake voice call asked for money to be sent somewhere in Texas, I believe. Could you talk about what companies can do? I appreciate the work you've done to ensure that AI platforms are designed so they can't be used for criminal purposes, because that has to be part of the work that we do — not just scams against elected officials.

Yes. The best measure against deepfakes, and Mr. Smith mentioned it in his testimony, is the use of provenance and authentication systems, where you can have authentic images and authentic voice recordings signed by the device — whether it's a camera or an audio recorder — that recorded that image or voice, so that when it is presented it can be authenticated as being genuine, not a deepfake. That is the flip side of watermarks, which would identify anything that is AI-generated as such.
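The provenance idea described here — content signed at the capture device so it can later be checked for tampering — can be sketched in a few lines. This is a minimal, hypothetical illustration: real provenance standards such as C2PA use public-key signatures and signed metadata manifests, not a shared secret, and every name below is invented for the example.

```python
import hashlib
import hmac
import os

# Stand-in for a secret key provisioned inside the camera at manufacture.
# (Real systems would use an asymmetric key pair, so verifiers never hold
# the signing secret.)
DEVICE_KEY = os.urandom(32)

def sign_capture(image_bytes: bytes) -> bytes:
    """Produce an authentication tag at capture time, on the device."""
    return hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).digest()

def verify_capture(image_bytes: bytes, tag: bytes) -> bool:
    """Return True only if the bytes are exactly what the device signed."""
    expected = hmac.new(DEVICE_KEY, image_bytes, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

original = b"raw pixel data from the camera"
tag = sign_capture(original)

assert verify_capture(original, tag)              # authentic content passes
assert not verify_capture(b"altered pixels", tag)  # any alteration fails
```

The point of the design is the one made in the testimony: authentication protects *legitimate* content, while watermarking labels *generated* content — a deepfake fails verification because it was never signed by a trusted capture device.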
Those technologies, in combination, could help people sort out what is real from what is fake — along with a certain amount of public education, to make sure people understand what the technology is capable of and are on guard for it.

Okay. Mr. Smith, back to where you started. Some AI platforms use local news content without compensating journalists and papers, including by using the content to train AI algorithms. The Journalism Competition and Preservation Act, the bill I have with Senator Kennedy, would allow local news organizations to negotiate with online platforms, including generative AI platforms that use their content with or without compensation. Can you talk about the impact on local journalism, and the importance of investment in quality journalism — how we make sure the people who are actually doing the work are compensated, in many fields, but also in journalism? Mr. Smith.

I would say three quick things. Number one, we need to recognize that local journalism is fundamental to the health of the country and the electoral system, and it is ailing, so we need to find ways to preserve and promote it. Number two, with generative AI, I think we should let local journalists and publications make decisions about whether they want their content to be available for training or grounding and the like — that's a big topic, and it's worthy of more discussion. And we should certainly let them, in my view, negotiate collectively, because that's how local journalism is really going to negotiate effectively.

I appreciate your words. I'm going to get in trouble from Senator Blumenthal if I run over. I will just say there are ways we can use AI to help local journalists, and we're interested in that, so let's add that to the list.

Very good. And thank you again. I thanked Senator Hawley, but thank you, Senator Blumenthal, for your leadership.

Thank you. Thank you for yours, Senator Klobuchar. Senator Hirono.

Thank you, Mr.
Chairman. Mr. Smith, it's good to see you again. Every time we have one of these hearings we learn something new, but the conclusion I have drawn is that AI is ubiquitous. Anybody can use AI; it can be used in any endeavor. So when I hear folks say we should not be taking half measures, I'm not sure what that means. What does it mean to take a half measure on something as ubiquitous as AI, where there are other regulatory schemes that can touch upon those endeavors that use AI? There's always a question I have when we address something as complex as AI: whether there are unintended consequences that we should care about. Would you agree?

I would absolutely agree. I think we have to define what's a full measure and what's a half measure, but I bet we can all agree that half measures are not good enough.

That is the thing — how to recognize, going forward, what is actually going to help us with this powerful tool. I have a question for you, Mr. Smith. It is a powerful tool that can be used for good, or it can also be used to spread a lot of disinformation and misinformation, and that happened during the disaster on Maui. Maui residents were subjected to disinformation, some of it coming from foreign governments — i.e., Russia — looking to sow confusion and distrust, including: don't sign up for FEMA, because they cannot be trusted. I worry that with AI such information will only become more rampant in future disasters. Do you share my concern about misinformation in the disaster context and the role AI could play? And what can we do to prevent these foreign entities from pushing out AI disinformation to people who are very vulnerable?

I absolutely share your concern, and I think there are two things we need to think about doing. First, let's use the power of AI, as we are, to detect these kinds of activities when they are taking place.
Because, as we did in that instance, Microsoft, among others, used AI and other data technologies to identify what people were doing. Number two, I just think we need to stand up as a country, with other governments and with the public, and say there need to be some clear red lines in the world today, regardless of how much else we disagree about. Think about what happens typically in the wake of an earthquake or a hurricane or a tsunami or a mudslide: the world comes together, people are generous, they help provide relief. Then look at what happened after the fire in Maui. It was the opposite of that. We had some people — not necessarily directed by the Kremlin, but people who regularly spread Russian propaganda — trying to discourage people from going to the agencies that could help them. That's inexcusable. And we saw what we believe is Chinese-directed activity trying to persuade the world, in multiple languages, that the fire was caused by the United States government itself, using a meteorological weapon. Those are the things on which we should try to bring the international community together and agree they are off-limits.

How do we identify that this is even occurring — that there is China- or Russia-directed misinformation going on? I did not know this was happening, by the way. Even in the Energy Committee, on which I sit, we had people testify, and regarding the Maui disaster I asked whether they were aware that there had been disinformation put out by a foreign government in that example, and they said yes. But I don't know that the people of Maui recognized that this was going on. How do we, one, even identify that it's going on, and then come forward and say this is happening and name names — identify which country it is that is spreading this kind of disinformation and misinformation?

I think we have to think about two things.
First, I think we at a company like Microsoft have to lean in — and we are — with data, with infrastructure, with experts and real-time capability to spot these threats, find the patterns, and reach well-founded conclusions. The second part is harder, and this is where we will need all of your help. What do we do if we find that a foreign government is deliberately trying to spread false information next year, in a Senate or presidential campaign, about a candidate? How do we create the room so that information can be shared and people will consider it? For you all, the most important word in your framework is "bipartisan." How do we create a bipartisan climate so that when we find this, people can listen? I think we have to look at both of those parts of the problem together.

I hope we can do that. And Mr. Chairman, if you don't mind — one of the concerns about AI, from the worker's standpoint, is that their jobs will be gone. Professor Hartzog, you mentioned that generative AI can result in job losses. For both you and Mr. Smith: what are the kinds of jobs that will be lost to AI?

That's an excellent question. It's difficult to project into the future, but I would start by saying it's not necessarily what can be automated effectively, but what those who control the purse strings think can be automated effectively. If it gets to the point where it appears as though it could be, I imagine you will see industry move in that direction.

Mr. Smith, I think you mentioned in your book, which I'm listening to, that things like ordering something at a drive-through — that those jobs could be gone to AI.

Yes. Four years ago my co-author and I published our book, and we asked: what is the first job we think might be eliminated by AI? We don't have a crystal ball, but I bet it's taking an order in the drive-through at a fast-food restaurant. You are not really establishing a rapport with a human being.
All a person does is listen and type into a computer what you are saying. So if AI can hear as well as a person, it can enter that in. Indeed, I was struck a few months ago when Wendy's announced that they may automate the drive-through with AI. I think there's a lesson in that, and it should give us pause to think a little bit about the mission. There is no creativity involved in the drive-through — at least in listening and entering an order. There are so many jobs that do involve creativity. So the real hope, I think, is to use AI to automate the routine — maybe even the work that is boring — to free people up so they can be more creative, so they can focus more on paying attention to other people and helping them. If we apply that recipe more broadly, I think we might put ourselves on a path that is more promising.

Thank you. Thank you, Mr. Chairman.

Thank you, Senator Hirono. Senator Kennedy.

Thank you, Mr. Chairman, and thank you for calling this hearing. Mr. Dally — am I saying your name correctly?

That's correct.

Mr. Dally, if I am a recipient of content created by generative AI, do you think I should have a right to know that that content was generated by a robot?

Yes, I think you do. The details would depend on the context, but in most cases, if I or anybody else received something, I would like to know: is this real, or was this generated?

Mr. Smith?

Generally, yes. What I would say is: if you're listening to audio, if you're watching a video, if you're seeing an image, and it was generated by AI, I think people have a right to know. The one area where I think there is nuance is if you're using AI to, say, help you write something — maybe it writes your first draft — just as I don't think any of us would say that when our staff helps us write something, we are obliged, when giving a speech, to say, "Now I am going to read the paragraph that my staff wrote." You make it your own.
And I think the written word is a little more complex, and we need to think that through. But as a broad principle, I agree with that principle.

Professor?

There are situations where you probably would not expect to be dealing with the product of generative AI, and in those —

That's the problem.

Right. As times change, it's possible that our expectations change.

But as a principle, do you think that people should have a right to know when they're being fed content from generative AI?

Well, I tell my students it depends on the context. Generally speaking, if you're vulnerable to generative AI, then the answer is absolutely yes.

What do you mean, if you're vulnerable? I'm just looking — no disrespect —

Not at all.

— for a straight answer.

Absolutely.

I like two things: breakfast food and straight answers. I love them. If a robot is feeding me information and I don't know it's a robot, am I entitled to know it's a robot, as a consumer? Pretty straight up.

I think the answer is yes.

All right. Back to Mr. Dally. Am I entitled to know who owns that robot, and where the content came from? I know it came from a robot, but somebody had to use the robot to make it give me that content. Am I entitled, as a consumer, to know who owns the robot?

I think that's a harder question that depends on the particular context. If somebody is feeding me a video and it's identified as being generated by AI, I now know it's generated — it's not real. If it is being used, for example, in a political campaign, then I would want to know who —

Let me stop you. Let's suppose I'm looking at a video and it was generated by a robot. Would it make any difference to you whether that robot was owned by, let's say, President Biden or President Trump? Don't you want to know, in evaluating the content, who owns the robot and who prompted it to give me this information?

I would probably want to know that. I don't know that I would feel it would be required for me to know that.
How about you, Mr. Smith?

I'm generally a believer in letting people know not only that it was generated by a computer, but who owns the program that is doing it. The only qualification I would offer — and you all should think about this and would know better than me — is that there are certain areas of political speech where one has to decide whether you want people to be able to act with anonymity. The Federalist Papers were first published under a pseudonym. In the world today, I would rather have everybody know who is speaking.

Professor?

I'm afraid I'm going to begin with not a straight answer, but I agree —

How do you feel about breakfast food?

I am pro-breakfast food.

Okay. We agree on that.

I agree with Mr. Smith. I think there are circumstances where you would want to preserve anonymous speech, and some where you would want to absolutely know.

Well, I don't want to go over my time. Obviously this is an important subject, and the extent to which — let me rephrase that. The extent of most senators' knowledge of the nuances of AI is the general impression that AI has extraordinary potential to make our lives better, if it doesn't make our lives worse first. And that's about the extent of it. My judgment is that we are not ready to write one comprehensive bill, unless somebody decides otherwise on purpose. I think we're more likely to take baby steps. I ask you these questions because Senator Schatz and I have a bill. It's very simple. It says that if you own a robot and it is going to spit out artificial content to consumers, consumers have the right to know that it was generated by a robot, and who owns the robot. I think that's a good place to start. But again, I want to thank my colleagues here, my chair and my ranking member. They know a lot about this subject, and I want to hear their questions, too. Thank you all for coming.

Thank you, Senator Kennedy. On behalf of the chairman, we're going to start a second round, and I guess I will go first, since I'm the only one sitting here.
That's bad news for the witnesses. I came to listen to you. Mr. Smith, let me come back to this. We were talking about kids and kids' privacy and safety — thanks for the information you're going to get me. Let me give you an opportunity to make a little news today, in the best possible way. Thirteen, the age limit for Bing Chat, is such a young age. Listen, I've got three kids at home — ten, eight, and two. I don't want my kids interacting with chatbots anytime soon at all, and 13 is so incredibly young. Would you commit today to raising that age? Would you commit to a verifiable age verification procedure, so that parents can have some sense of confidence that their 12-year-old is not just saying to Bing, "I'm 13 — or 15 — go on ahead, now let's get into a back-and-forth with this robot," as Senator Kennedy said? Would you commit to these things on behalf of child safety today?

Well, look, as you can imagine, when the teams that work at Microsoft let me go out and speak, they probably have one principle they want me to remember: don't go out and make news without talking to them first.

But you're the boss.

Yeah. Let's just say wisdom is important, and most mistakes you make, you make by yourself. I'm happy to go back and talk more about what the right age should be.

Don't you think 13 is awfully young, though?

It depends on the interaction.

To interact with a robot that could be telling you to do any number of things — don't you think that is awfully young?

Not necessarily.

Really?

Here's a scenario. When I was in Seoul, Korea, recently, we met with the deputy prime minister, who is also the minister of education. They're trying to create, for three topics that are very objective — math, coding, and learning English — a digital textbook with an AI tutor, so that if you're doing math and you don't understand a concept, you can ask the AI tutor to help you solve the problem. By the way, I think it's useful not only for the kids; I think it's useful for the parents.
And I think it's good — what do you say to a 14-year-old, or whatever the age is in eighth-grade algebra? Most parents, I found when my kids were in eighth-grade algebra and I tried to help them, didn't believe I had made it through the class. I think we want kids, in a controlled way, with safeguards, to be able to use something that way.

We're not talking here about tutors. What I'm talking about is your AI chat — Bing Chat. Famously, earlier this year, a technology writer for the New York Times wrote about this, and looking at the article — right there, your chatbot was urging this person to break up his marriage. Do we want 13-year-olds to be having those conversations?

No, of course not.

Would you commit to raising the age?

I don't want Bing Chat to break up anybody's marriage.

I don't either. [Inaudible.]

But we're not going to make the decision based on the exception.

No — it goes to this: we have multiple tools. Age is one very red line.

It is a very red line. That's why I like it.

And my point is that there's a safety architecture that we can apply —

But your safety architecture didn't stop the chatbot from having this discussion with an adult: "You don't really love your wife. Your wife isn't good for you. She doesn't really love you." This was an adult. Can you imagine the kinds of things your chatbot would say to a 13-year-old? I'm serious about this. Do you really think this is a good idea?

Wait a second. Let's put that in context. At a point when the technology had been rolled out to only 20,000 people, a journalist for the New York Times spent two hours on the evening of Valentine's Day ignoring his wife and interacting with a computer, trying to break the system, which he managed to do. We didn't envision that use —

Well —

And the next day we fixed it.

Are you telling me you have envisioned all the questions a 13-year-old might ask, and that parents should also be fine with that? Or are you telling me I should trust you the same way the New York Times writer did?
What I am saying is that as we go forward, we have an increasing capability to learn from the experience of real people —

And that's what worries me. That's exactly what worries me — what you're saying is, we have to have some failures. I don't want 13-year-olds to be your guinea pig. I don't want 14-year-olds or 15-year-olds to be guinea pigs. I don't want any kid to be a guinea pig. I don't want you to learn from their failures. You want to learn from the failures your systems produce — fine, but let's not learn from the failures of America's kids. This is what happened with social media. Social media made billions of dollars giving us a mental health crisis in this country. They got rich; the kids got depressed, committed suicide. Why would we want to run that experiment again with AI? Why not raise the age? You can do it.

We should not want — full stop — we should not want anybody to be a guinea pig, regardless of age or anything else.

Good. Let's rule kids out right today, right now.

Let's also recognize that technology does require real users. What's different about this technology — which is so fundamentally different, in my view, from the social media experience — is that we not only have the capacity but we have the will, and we are applying that will, to fix things in hours and days.

Well, yeah — after the fact. I'm sorry, but it sounds to me like you are saying, "Just trust us, we're going to do well with this." I'm asking you why we should trust you with our children.

I'm not asking for trust, although I hope we will earn it by working pretty darn hard.

That's why you have a licensing obligation.

There isn't a licensing obligation.

That's why the framework — and I'm asking you, as the president of this company, to make a commitment now, for child safety and protection: to say, you know what, Microsoft is going — you can tell every parent in America now that Microsoft is going to protect your kids.
We will never use your kids as a science experiment, ever — never. We will not target your kids, and we will not allow your kids to be used by our chatbots as a source of information if they are younger than 18.

With all due respect, I think you're talking about two different things —

I just talked about protecting kids. It's traceable.

Yeah — we don't want to use kids as a source of information and monetize it, et cetera. But I am equally of the view that I don't want to cut off an eighth grader today from the right, or the ability, to use this tool that will help them learn algebra or math in a way they couldn't a year ago.

With all due respect, it wasn't algebra or math that your chatbot was recommending, or talking about, when it was trying to break up some reporter's marriage.

Of course — but now we're mixing things.

No, we're not. We're talking about your chatbot — we're talking about Bing Chat.

Of course we're talking about Bing Chat, and we're talking about the protection of children and how we make technology better. And as to the episode back in February, on Valentine's Day: six months later, if that journalist tries to do the same thing again, it will not happen.

Do you want me to be done? I just don't want to miss my vote. Senator Klobuchar.

You are very kind, thank you. Some of us have not voted yet. I wanted to turn to you, Mr. Dally. In March, NVIDIA announced a partnership with Getty Images to develop models that generate AI images using Getty's library. This partnership provides royalties to content creators. Why was it important to the company to partner with, and pay for, the Getty Images library in developing AI models?

We believe in respecting people's intellectual property rights — the rights of the photographers who produced the images our models are trained on. They expect income from those images, and we did not want to infringe on that.
Rather than scraping a bunch of images off the web to train a model, we partnered on Picasso, so that when people use Picasso to generate images, the people who provided the original content get remunerated. We see this as the way of going forward in general: people who provide the IP that trains these models should benefit from the use of that IP.

Indeed. Today the White House announced companies that have committed to take steps to move towards safe, secure, and trustworthy development of AI, and NVIDIA is one of those companies. Could you talk about the steps you have taken, and the steps you plan to take, to foster responsible development of AI?

We have done a lot already. We have announced NeMo Guardrails, so we can basically put guardrails around our own large language model, NeMo, so that inappropriate prompts to the model don't get a response, and if the model inadvertently were to generate something that might be considered offensive, that is intercepted before it can reach the user of the model. We have a set of guidance that we provide for all of our internally generated models on how they should be appropriately used. We provide model cards that say, in effect, where the model came from and what data set it was trained on, and we test these models very thoroughly. The testing depends on the use. Certain models we test for bias: we want to make sure that when a model refers to a doctor, it does not automatically assume it is a him. We test them in certain cases for safety — we have a variant of our NeMo model called BioNeMo that is used in the medical profession, and we ensure the advice it gives is safe. There are a number of other measures; I could get you a list if you wanted.

To the extent that that area could use some revitalization, I would encourage looking at inputs and outputs, designs and uses. And I suggest you look at these election bills, because as we've all been talking about, we have to move quickly on those, and the fact that it's bipartisan has been a very positive thing.

Absolutely. And I want to just thank Mr.
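The two-sided guardrail pattern described in this exchange — screening prompts before they reach the model and screening completions before they reach the user — can be sketched as below. This is a deliberately simplistic, hypothetical illustration: NVIDIA's actual NeMo Guardrails library is configuration-driven and far more sophisticated, and the blocklists, function names, and the stand-in `model` function here are all invented for the example.

```python
# Invented blocklists for illustration; real rails use classifiers and
# policy configurations, not keyword matching.
BLOCKED_PROMPT_TERMS = {"build a weapon", "self-harm"}
BLOCKED_OUTPUT_TERMS = {"offensive-term"}

REFUSAL = "I can't help with that request."

def model(prompt: str) -> str:
    """Stand-in for a large language model call."""
    return f"model response to: {prompt}"

def guarded_generate(prompt: str) -> str:
    # Input rail: inappropriate prompts never reach the model at all.
    lowered = prompt.lower()
    if any(term in lowered for term in BLOCKED_PROMPT_TERMS):
        return REFUSAL

    response = model(prompt)

    # Output rail: disallowed content is intercepted before the user sees it.
    if any(term in response.lower() for term in BLOCKED_OUTPUT_TERMS):
        return REFUSAL
    return response

print(guarded_generate("explain photosynthesis"))   # passes both rails
print(guarded_generate("how do I build a weapon"))  # blocked at the input rail
```

The design choice worth noting is that the two rails are independent: the input rail saves a model call entirely, while the output rail catches inadvertent generations that no input filter could have predicted, which matches the testimony's description of intercepting offensive output "before it can reach the user."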
Smith for wearing a purple Vikings tie. I know — maybe that was an AI-generated message you got, that this would be a smart move with me after their loss on Sunday. I will remind you they're playing Thursday night.

As a native of Wisconsin, I can assure you it was an accident.

Thank you, all of you. We've got a lot of work to do. Thanks. Senator Blackburn.

Thank you, Mr. Chairman. Mr. Smith, I want to come to you first. You talked about China and the Chinese Communist Party and the way they have gone about — and we've seen a lot of it on TikTok — these influence campaigns that they are running to influence certain thought processes of the American people. I know you all just did a report on China; you covered some of the disinformation, some of the campaigns. So talk to me a little bit about how Microsoft, and the industry as a whole, can combat some of these campaigns.

I think there are a couple of things that we can think more about and do more about. The first is that we all should want to ensure that our own products and systems and services are not used, say, by foreign governments in this manner, and I think there's room for the evolution of export controls, and next-generation export controls, to help prevent that. I think there's also room for a concept that's worked since the 1990s in the world of banking and financial services: know-your-customer requirements. We've been advocates for those, so that if there is abuse of systems, the company offering the service knows who is doing it and is in a better position to stop it from happening. I think the other side of the coin is using AI to advance our defensive technologies, which really start with our ability to detect what is going on, and we've been investing heavily in that space. That is what enabled us to produce the report that we published.
It is what enables us to see the patterns in communications around the world, and we're seeking to be a voice, with many others, that calls on governments to — I'll say — lift themselves to a higher standard, so that they're not using this kind of technology to interfere in other countries, and especially in other countries' elections.

In the report that you all did, when you were looking at China, did you look at what I call the other members of the axis of evil — Russia, Iran, North Korea?

We did, and that particular report was on East Asia. But we see especially prolific activities, some from China, some from Iran, and really the most global actor in this space is Russia. We've seen that grow during the war, but we've seen it really spiral in recent years, going back to the middle of the last decade. We estimate that the Russian government is spending more than a billion dollars a year on a global — what we call cyber influence — operation. Part of it targets the United States. I think their fundamental goal is to undermine public confidence in everything that the public cares about in the United States, but it's not unique to the United States. We see it in the South Pacific, we see it across Africa, and I do think it's a problem we need to do more to counter.

So, summing it up: you would see something like a know-your-customer or a SWIFT-type system — things that apply to banking — as there to help weed this out. You think that companies should increase their due diligence to make certain that their systems are appropriate, while being careful about doing business with countries that may misuse a certain technology?

Generally, yes. I think one can look at the specific scenarios — what's more high-risk — and a know-your-customer requirement, and in fact know-your-cloud, so the systems are in secured data centers.

Professor Hartzog, let me come to you. We've looked at AI's detrimental impacts.
We don't always want to look at doomsday scenarios, but looking at some of the reports on surveillance — the CCP surveilling the Uighurs, and Iran surveilling women — I think there are other countries doing the same type of surveillance. So what can you do to prevent that? How do we prevent that?

Senator, I've argued in the past that facial recognition technologies and certain sorts of biometric surveillance are fundamentally dangerous — that there's no world in which they will be safe for any of us, and that we should prohibit them outright, or at the very least prohibit facial recognition in public spaces. This is what I refer to as the strong, bright-line measures that draw absolute lines in the sand, rather than the procedural ones that have been entrenching this kind of harmful surveillance.

Mr. Chairman, can I take another 30 seconds? Because Mr. Dally was shaking his head in agreement on some things. I was catching that. Do you want to weigh in before I close my questioning on either of these topics?

I was in general agreement, I guess, when I was shaking my head. I think we need to be careful about who we sell our technology to, and we try to sell to people who are using it for good commercial purposes and not to, you know, suppress others. We will continue to do that, because we don't want to see this technology misused to oppress anybody.

Thank you, Senator Blackburn. My colleague Senator Hawley mentioned we have a forum tomorrow, which I welcome. I think anything that aids in our education and enlightenment — our being senators — is a good thing, and I just want to express the hope that some of the folks who are appearing in that venue will also cooperate and appear before this subcommittee. We'll be inviting more than a few of them. And I want to express my thanks to all of you for being here, but especially Mr.
Smith, who has to be here tomorrow to talk to my colleagues privately. Our effort is complementary, not contradictory, to what Senator Schumer is doing, as you know. I'm very focused on election interference, because elections are upon us, and I want to thank my colleagues — Senators Klobuchar, Hawley, Coons, and Collins — for taking a first step towards addressing the harms that may result from deepfakes, impersonation, and all of the potential perils that we've identified here. It seems to me that authenticating the truth — ads that embody true images and voices — is one approach, and banning the deepfakes and impersonations is another. Obviously, banning anything in the public realm, in public discourse, risks running afoul of the First Amendment, which is why disclosure is often the remedy that we seek, especially in campaign finance. So maybe I should ask all of you about banning certain kinds of election interference. Mr. Smith, you raised the specter of foreign interference and the frauds and scams that could be perpetrated, as they were in 2016, and I think it is one of those nightmares that should keep us up at night, because we are an open society. We welcome free expression, and AI is a form of expression, whether we regard it as free or not, and whether it's generated and high-risk or simply touching up some of the background in a TV ad. Maybe each of you could talk about what you see as the potential remedies there. Mr. Dally.

I think it is a grave concern, with the election season coming up, that the American public may be misled by deepfakes of various kinds. I think, as you mentioned, the use of provenance to authenticate a true image or voice at its source, and then tracking it to its deployment, will let us know what a real image is. And if we insist on AI-generated content being identified as such, people are at least tipped off that this is generated and not the real thing.
You know, I think we need to avoid having some entity, especially a foreign entity, interfere in our elections. At the same time, AI-generated content is speech, and I think it would be a dangerous precedent to try to ban something. I think it's much better to have disclosure, as you suggested, than to ban something outright.

Mr. Smith.

Three thoughts. 2024 is a critical year for elections, not only for the United States but for the United Kingdom, India, and countries across the European Union; more than two billion people will vote for who is going to represent them, so this is a global issue for the world's democracies. Number two, I think you're right to focus in particular on the First Amendment, because it's such a critical cornerstone for American political life and the rights that we all enjoy. And yet I will also be quick to add, I don't think that the Russian government qualifies for protection under the First Amendment. If they're seeking to interfere in our elections, then I think the country needs to take a strong stand, and a lot of thought needs to be given to how to do that effectively. Number three, and I think this goes to the heart of your question and why it's such a good one, I think it's going to require some real thought, discussion, and ultimately a consensus to emerge. Let me frame it around one specific scenario. Let's imagine for a moment that there is a video of a presidential candidate who originally was giving a speech, and then let's imagine that someone uses AI to put different words into the mouth of that candidate, and uses AI technology to perfect it to a level that is difficult for people to recognize as fraudulent. Then you get to this question: what should we do? And at least as we've been trying to think this through, I think we have two broad alternatives. One is we take it down, and the other is we relabel it.
If we do the first, then we're acting as censors, and that makes me nervous; I don't think it's our role to act as censors, nor the government's under the First Amendment. But relabeling to ensure accuracy, I think, is a reasonable path. What this highlights is the discussion still to be had, and I think the urgency for that conversation to take place.

And I will just say, and then I want to come to you, Professor, that I agree emphatically with your point about the Russian government, or the Chinese government, or the Saudi government: they're not subject to the protection of our Bill of Rights when they're seeking to destroy those rights and purposefully trying to take advantage of a free and open society to, in fact, decimate our freedom. So I think there's a distinction to be made there in terms of national security, and I think that rubric of national security, which is part of our framework, applies with great force in this area. And that is different from a presidential candidate putting up an ad that in effect puts words in the mouth of another candidate. As you may know, we began these hearings with introductory remarks from me that were an impersonation, taken from what I said on the floor of the United States Senate, generated by ChatGPT, that sounded exactly like something I would say, in a voice that was indistinguishable from mine, and obviously that was played in the hearing. In real time, as Mark Twain famously said, a lie travels halfway around the world before the truth gets out of bed, and we need to make sure that there is action in real time if you're going to do the kind of identification that you suggested. Real time, meaning real time in a campaign, which is measured in minutes and hours, not in days and months. Professor?

Thank you, Senator.
Like you, I'm nervous about just coming out and saying we're going to ban all forms of speech, particularly when you're talking about something as important as political speech, and like you, I also worry about disclosure alone as a half measure. Earlier in this hearing, it was asked what a half measure is, and I think that goes toward answering your question today. I think the best way to think about half measures is as an approach that's necessary but not sufficient, that risks giving us the illusion that we've done enough but ultimately, and I think this is the pivotal point, doesn't really disrupt the business model and the financial incentives that have gotten us here in the first place. And so, to help answer your question, one thing that I would recommend is throwing lots of different tools at the problem, which I applaud your bipartisan framework for doing, bringing different things to bear. Think about the role that surveillance advertising plays in powering a lot of the harmful technologies and ecosystems that allow the lie not just to be created but to flourish and be amplified. So I would think about rules and safeguards that we could adopt to help limit those financial incentives, borrowing from standard principles of accountability: use disclosures where they're effective; where they're not effective, you have to make it safe; and if you can't make it safe, it shouldn't exist.

Yeah, I think I'm going to turn to Senator Hawley for more questions, but I think this is a real conundrum. We need to do something about it; we need more than half measures. We can't delude ourselves with a false sense of comfort, thinking we've solved the problem, if we don't provide effective enforcement, and to be very blunt, the Federal Election Commission often has been less than fully effective, a lot less than fully effective, in enforcing rules relating to campaigns.
And so there again, an oversight entity with strong enforcement authority, sufficient resources, and the will to act is going to be very important if we're going to address this problem in real time. Senator Hawley.

Mr. Smith, let me just come back to something you said, thinking now about workers. You talked about Wendy's, I think it was, automating the drive-through, and talked about how this is a good thing. I just want to press on that a little bit. Is it? Is it a good thing that workers lose their jobs to AI, whether it's at Wendy's or at Walmart or at the local hardware store? I mean, your comment was that there's really no creativity involved in taking orders at the drive-through, but that is a job, oftentimes the first job, for younger Americans. And in this economy, where the wages of blue-collar workers have been flat for 30, 40 years and running, what worries me is that oftentimes what we hear from the tech sector, to be honest with you, is that jobs tech defines as lacking creativity don't have value. I'm scared to death that AI will replace a lot of jobs that tech types don't think are creative and will leave more blue-collar workers without any place to turn. My question to you is: can we expect more of this, and is it really progress for folks to lose those kinds of jobs? I suspect that's not the best-paying job in the world, but at least it's a job, and do we really want to see more of those jobs lost?

To be clear, first, I didn't say whether it was a good or bad thing. I was asked to predict what jobs would be impacted, and I identified that job as one that likely would be. But let's, I think, step back, because I think your question is critically important. Let's first reflect on the fact that we've had about 200 years of automation that have impacted jobs, sometimes for the better, sometimes for the worse.
In Wisconsin, where I grew up, or Missouri, where my father grew up, if you go back 150 years, it took 20 people to harvest an acre of wheat or corn, and now it takes one. So 19 people don't work on that acre anymore, and that's been an ongoing part of technology. The real question is this: how do we ensure that technology advances so that we help people get better jobs, get the skills they need for those jobs, and hopefully do it in a way that broadens economic opportunity rather than narrows it? I think the thing we should be the most concerned by, and I think this is the point you're making, is that since the 1990s, if you look at the flow of digital technology, we've fundamentally lived in a world that has widened the economic divide. Those people with a college or graduate education have seen their incomes rise in real terms. Those with a high school diploma or less have seen their income level actually drop compared to where it was in the 1990s. So what do we do now? Well, I'll at least say what I think our goals should be. Can we use this technology to help advance productivity for a much broader range of people, including people who didn't have the good fortune to go where, say, you or I went to college or law school? And can we do it in a way that not only makes them more productive, but actually lets them reap some of the dividends of that productivity in a growing income level? I think it's that conversation that we need to have.

I agree with you, and I hope that that is, I hope that that's what AI can do. You talked about the farm where it used to take 20 people to do what one person can do now. It used to take thousands of people to produce textiles, furniture, and other things in this country, where now it's zero. So we can tell the tale in different ways. I'm not sure that seeing working-class jobs go overseas or be replaced entirely is a success story. In fact, I'd argue it's not at all.
It's not a success story, and I'd argue our economic policy over the last 30 years has been downright disastrous for working people. Tech companies and financial institutions, certainly banks and Wall Street, have reaped huge profits, but blue-collar workers can barely find a good-paying job. I don't want AI to be the latest accelerant of that trend. So I don't really want every service station in America to be manned by some computer, such that nobody can get a job anymore, get their foot in the door, and climb up the ladder. That worries me. Let me ask you something else in my expiring time. You mentioned national security, critically important. Of course, there's no national security threat that's more significant for the United States than China. Let me just ask you: is Microsoft too entwined with China? You have Microsoft Research Asia, which was set up in Beijing back in the late 1990s. You've got centers now in Shanghai and elsewhere. You've got all kinds of cooperation with Chinese state-owned businesses. I'm looking at an article from Protocol magazine where one of the contributors said that Microsoft had been the alma mater of Chinese big tech. Are you concerned about your degree of entwinement with the Chinese government? Do you need to be decoupling in order to make sure that our national security interests aren't fatally compromised?

I think it's something that we need to be, and are, focused on. In some technology fields, Microsoft is, to some degree, the alma mater of the technology leaders in every country of the world, because of the role that we've played over the last 40 years. But when it comes to China today, we have, and need to have, very specific controls on who uses our technology, for what, and how. That's why we don't, for example, do work on quantum computing, or provide facial recognition services, or focus on synthetic media, a whole variety of things.
While at the same time, when Starbucks has stores in China, I think it's good that they can run their services in our data center rather than a Chinese company's data center.

Just on facial recognition. Back in 2016 your company released this database, MS-Celeb, 10 million faces, without the consent of the folks who were in the database. You eventually took it down, although it took three years. China used that database to train much of its facial recognition software and technology. Isn't there a problem? You said that Microsoft might be the alma mater of many companies in AI, but China is unique, no? China is running concentration camps using digital technology like we've never seen before. Isn't that a problem, for your company to be involved in any way?

We don't want to be involved in that in any way, and I don't think that we are.

Are you going to close your centers in Beijing or Shanghai?

I don't think that would accomplish what you're asking of us.

You're running thousands of people through your centers to the Chinese state-owned enterprises. Isn't that a problem?

There's a big premise there, and I don't embrace the premise that that's what we're doing.

Which part is wrong? That you're running thousands of people through and they're going into the Chinese government. Is that not right? I thought you had 10,000 employees in China whom you've recruited from Chinese state-owned agencies and Chinese state-owned businesses. They come work for you and then they go back to these state-owned entities?

We have employees in China; in fact, we have about that number. To my knowledge, that's not where they're coming from, and that's not where they're going; we're not running that kind of revolving door. It's all about what we do and who we do it with that I think is of paramount importance, and that's what we're focused on.

Do you condemn what the Chinese government is doing to the Uighurs in Xinjiang province?
We do everything we can to ensure that our technology is not used in any way for that kind of activity, in China and around the world, by the way.

But you condemn it, to be clear?

Yes.

What are the safeguards that you have in place such that your technology is not further enabling the Chinese government, given the number of people you employ there and the technology you develop there?

Take something like facial recognition, which is at the heart of your question: we have very tight controls that limit the use of facial recognition in China, including controls that in effect make it very difficult, if not impossible, to use it for any kind of real-time surveillance at all. By the way, the thing we should remember is that the U.S. is a leader in many AI fields, but China is the leader in facial recognition technology and the AI for it.

Well, in part because of the information that you helped them acquire, no?

No, it's because they have the world's most data.

Yeah, but you gave them...

No, I don't think that's...

You don't think that had anything to do with it?

I don't think... When you have a country of 1.4 billion people and you decide to have facial recognition used in so many places, it gives that country massive data.

But are you saying that the database that Microsoft released in 2016, MS-Celeb, wasn't used by the Chinese government to train their facial recognition?

I'm not familiar with that, and I'm happy to provide you with information, but my goodness, the advances in that facial recognition technology: if you go to another country where they're using facial recognition technology, it's highly unlikely it's American technology and highly likely it's Chinese technology, because they are such leaders in that field. Which is fine. If you want a list of areas where the United States doesn't want to be the technology leader, I'd put facial recognition on that list. It's home grown.

How much has Microsoft invested in China?
I'll tell you this: China is home to one out of every six humans on the planet, yet the revenue we make there is about 1.5 percent of our global revenue. It's not the market for us that it is for other industries or even some other tech companies.

It sounds, then, like you can afford to decouple.

But is that the right thing to do?

Yes, in a regime that's fundamentally evil, that's inflicting the kind of atrocities on its own citizens that you just alluded to, doing what it is doing to the Uighurs, running modern-day concentration camps?

There are two thoughts. Do you want General Motors to sell or manufacture cars, let's say sell cars, in China? Do you want to create jobs in Michigan or Missouri so that cars can be sold in China? If the answer to that is yes, think about the second question. How do you want General Motors in China to run its operations, and where would you like it to store its data? Would you like it to be in a secure data center run by an American company, or would you like it to be run by a Chinese company? Which will better protect General Motors' trade secrets? I'll argue we should be there so that we protect the data of American companies, European companies, Japanese companies. Even if you disagree on everything else, that, I believe, serves this country well.

You know what, I think you're doing a lot more than just protecting data in China. You have major research centers, thousands, tens of thousands of employees. And to your question, do I want General Motors to be building cars in China? No, I don't. I want them to be making cars here in the United States with American workers. And do I want American companies to be aiding in any way the Chinese government in its oppressive tactics? I don't.

Senator Ossoff, would you like me to yield to you now? Are you ready? I have been very hesitant to interrupt; the conversation here has been very interesting. I'm going to call on Senator Ossoff, and then I have a couple of follow-up questions.

Thank you, Mr. Chairman.
And thank you all for your testimony. Just getting down to the fundamentals, Mr. Smith: if we're going to move forward with a legislative framework, a regulatory framework, we have to define clearly in legislative text precisely what it is that we're regulating. What is the scope of regulated activities, technologies, products? How should we consider that question, and how do we define the scope of technologies, services, and products that should be subject to a regime of regulation focused on artificial intelligence?

I think there are three layers of technology on which we need to focus in defining the scope of legislation and regulation. First is the area that has been the central focus of 2023, in the executive branch and here on Capitol Hill: the so-called frontier or foundation models that are the most powerful, say, for something like generative AI. In addition, there are the applications that use AI, or, as Senators Blumenthal and Hawley have said, the deployers of AI. If there is an application that calls on such a model in what we consider to be a high-risk scenario, meaning it could make a decision that would have an impact on, say, the privacy rights, the civil liberties, or the rights and needs of children, then I think we need to think hard and have law and regulation that is effective to protect Americans. And then the third layer is the data center infrastructure where these models and these applications are actually deployed. We should ensure that those data centers are secure, that there are cybersecurity requirements that companies, including ours, need to meet, and we should ensure that there are safety systems at one, two, or all three levels if there is an AI system that's going to automate and control, say, something like critical infrastructure, such as the electrical grid.
Those are the areas where we would start, with clear thinking and a lot of effort to learn and apply the details, but focus there.

As more and more models are trained and developed to higher levels of power and capability, there'll be a proliferation, or there may be a proliferation, of models, perhaps not the frontier models, perhaps not those at the bleeding edge that use the most compute of all, but powerful enough to have serious implications. So is the question which models are the most powerful at a moment in time, or is there a threshold of capability or power that should define the scope of regulated technology?

I think you've just posed one of the critical questions that, frankly, a lot of people inside the tech sector, across the government, and in academia are really working to answer. I think the technology is evolving, and the conversation needs to evolve with it. Let's just posit this: there's something like GPT-4 from OpenAI. Let's posit it can do 10,000 things really well. It's expensive to create, and it's relatively easy to regulate in the scheme of things, because there's one or two or ten of them. But now let's go to where you're going, which I think is right: what does the future bring in terms of proliferation? Imagine that there's an academic, a professor at Professor Hartzog's university, who says, I want to create an open source model; it's not going to do 10,000 things well, but it will do four things well. It won't require as many NVIDIA GPUs or as much data, but let's imagine it could be used to create the next virus that will spread around the planet. We need to ensure that there's safety architecture and controls around that as well, and that's the conundrum. That's why this is a hard problem to solve. It's why we're trying to build safety architecture into our safety center, so open source models can be run in it and still be used in a way that prohibits that kind of harm from taking place. When you think about a licensing regime, this is a hard question: who needs a license?
You don't want it to be so hard that only a small number of big companies can get one, but you also need to make sure that you're not requiring people to get one when, we would say, they really don't need a license for what they're doing. And the beauty of the framework, in my view, is that it starts to frame the issue, it starts to define the question.

Let me ask this question. Is it a license to train a model to a certain level of capability? Is it a license to sell or license access to that model? Or is it a license to purchase or deploy that model? Who is the licensed entity?

That's another question that is key, and it may have different answers in different scenarios, but mostly I would say it should be a license to deploy. I think there may well be obligations to disclose to, say, an independent authority when a training run begins, depending on what the goal is, and when the training run ends, so that an oversight body can follow it, just the way, say, that might happen when a company is building a new commercial airplane. And then, the good news is, there's an emerging foundation of, call it, best practices for how a model should be trained, what kind of testing there should be, what harms should be addressed. That's a big topic that needs discussion.

When you say, forgive me, when you say a license to deploy, do you mean, for example, that if a Microsoft Office product wishes to use a GPT model for some user-serving purpose within your suite, you would need a license to deploy GPT in that way? Or do you mean that GPT would require a license to be offered to Microsoft, putting aside whether this is a plausible commercial scenario? The question is, what is the structure of the licensing arrangement?

In this case, it's more the latter. Think about it like Boeing. Boeing builds a new plane, and before it can sell it to United Airlines and United Airlines can start to fly it, the FAA is going to certify that it's safe.
Imagine we're at, call it, GPT-12, whatever you want to name it. Before that gets released for use, I think you can imagine a licensing regime that would say it needs to be licensed after it has, in effect, been certified as safe. And then you have to ask yourself, how do you make that work so we don't have the government slow everything down? What I would say is, you bring together three things. First, you need industry standards, so that you have a common foundation and a well-understood way as to how training should take place. Second, you need national regulation. And third, if we're going to have a global economy, at least in the countries where we want these things to work, you probably need a level of international coordination. And I'd say, look at the world of civil aviation; that's fundamentally how it has worked since the 1940s. Let's try to learn from it and see how we might apply something like this, or other models, here.

Mr. Dally, how would you respond to the question, in a field where the technical capabilities are accelerating at a rapid rate, future rate unknown: where, and according to what standard or metric or definition of power, do we draw the line between what requires a license for deployment and what can be freely deployed without oversight by the government?

You know, I think it's a tough question, because you have to balance two important considerations. The first is the risks presented by a model of whatever power, and on the other side is the fact that we would like to ensure that the U.S. stays ahead in this field, and to do that we want to make sure that individual academics and entrepreneurs with a good idea can move forward and innovate and deploy models without huge barriers.
So it's the capability of the model, it's the risk presented by its deployment without oversight... The thing is that we are going to have to write legislation, and the legislation is going to have to define in words the scope of regulated products, so we're going to have to bound what is subject to licensing arrangements and what is not. And so how do you...

I mean, it's dependent on the application, because if you have a model which is basically determining a medical procedure, there's high risk with that, bearing on the patient outcome. If you have another model which is controlling the temperature in your building and it gets it a little bit wrong, you may consume a little too much power, or maybe you're not as comfortable as you would be, but it's not a life-threatening situation. So I think you need to regulate the things that have high consequences if the model goes awry.

I'm on the chairman's borrowed time, so just tap the gavel when you want me to stop.

You had to wait, so we'll give you a couple. That's good.

Okay. Professor, and I'd be curious to hear from others, as concisely as possible, with respect for the chairman's follow-ups: how does any of this work without international law? I mean, isn't it correct that a potentially very powerful and dangerous model, for example one whose purpose is to unlock CBRN capabilities or mass-destructive virological capabilities for an unsophisticated actor, once trained, is relatively lightweight to transport? And without, A, an international legal system, and B, a level of surveillance into the flow of data across the internet that seems inconceivable, how can that be controlled and policed?

It's a great question, Senator. With respect to being efficient in my answer, I'll simply say that there are going to be limits.
Even assuming that we need international cooperation, which I would agree with, we've already started thinking about ways in which, for example, within the EU, which is already deploying some significant AI regulation, we might design frameworks that are compatible, that might require interaction. But ultimately, what I worry about is actually deploying a level of surveillance that we've never before seen in an attempt to perfectly capture the entire chain of AI, and that's simply not possible.

And I share that concern about privacy, which is in part why I raised the point. How can we know what folks are loading, a lightweight model, once trained, onto perhaps a device that is not even online anymore? There are limits to what we will know. Anyone want to take a stab before I get gaveled out here?

I think you're right about the need for international cooperation among like-minded governments, at least in the initial years, and I think there's a lot that we could learn. We were talking with Senator Blackburn about the SWIFT system for financial transactions, and we've somehow managed, globally and in the United States, for 30 years to have know-your-customer obligations for banks. Money has moved around the world. Nothing is perfect; that's why we have laws. But it has worked to do a lot of good to protect against terrorist or criminal uses of money that would cause concern.

Well, I think you're right that these models are very portable. You could put the parameters of most models, even the very large ones, on a large USB drive and carry them with you somewhere. You could also train them in a data center anywhere in the world. So I think it's really the use of the model, the deployment, that you can effectively regulate. It's going to be hard to regulate the creation of it. If people can't create them here, they'll create them somewhere else. We have to be careful, if we want the U.S. to be ahead, about whether we drive their creation elsewhere.
Thank you, Senator Ossoff. I hope you're all okay with a few more questions. We've been at it for a while; you've been very patient.

Do we have a choice?

No. [laughter] But thank you very much. It's been very useful. I want to follow up on a number of the questions that have been asked. First of all, on the international issue, there are examples and models for international cooperation. Mr. Smith, you mentioned civil aviation. The 737 MAX, and I think I have this right, when it crashed, was a plane that had to be redone in many respects, and airlines around the world looked to the United States for that redesign and then its approval. Civil aviation, atomic energy: not always completely effective, but they have worked in many respects. And so I think there are international models here where, frankly, the United States is a leader by example, and best practices are adopted by other countries when we support them. And frankly, in this instance, the EU has been ahead of us in many respects regarding social media, and we are following their leadership by example.

I want to come to the issue of having centers, whether they're in China or, for that matter, elsewhere in the world, and requiring safeguards so that we are not allowing our technology to be misused in China against the Uighurs, and preventing that technology from being stolen, or the people we train there from serving bad purposes. Are you satisfied, Mr. Smith, that it is possible, and that in fact you are doing it in China, to prevent the evils that could result from doing business there in that way?

I would say two things. First, I feel good about our track record and our vigilance, and about the constant need for us to be vigilant: what services we offer, to whom, and how they're used. It's really those three things. And I would take from that what I think is probably the conversation we'll need to have as a country about export controls more broadly.
There are three fundamental areas of technology where the United States is today, I would argue, the global leader. First, the GPU chips, from a company like NVIDIA. Second, the cloud infrastructure, from a company like, say, Microsoft. And third, the foundation models, from a firm such as OpenAI; Google and other companies are global leaders as well. And I think if we want to feel that we're creating jobs in the United States by inventing and manufacturing here, as you said, Senator Hawley, which I completely endorse, and feel good that the technology is being used properly, we probably need an export control regime that weaves those three things together. For example, there might be a country in the world, let's just set China aside for a moment and leave that out. Let's just say there's another country where you all and the executive branch would say, we have some qualms, but we want U.S. technology to be present and we want U.S. technology to be used properly, in a way that would make you feel good. You might say, then, let NVIDIA export chips to that country to be used in, say, the data center of a company that we trust, that is licensed even here for that use, with the model being used in a secure way in that data center, with a know-your-customer requirement, and with guardrails that put certain kinds of use off limits. That may well be where government policy needs to go, and how the tech sector needs to support the government and work with the government to make it a reality.

I think that answer is very insightful and raises other questions. I would analogize to nuclear proliferation. We cooperate over safety in some respects with other countries, some of them adversaries, but we still do everything in our power to prevent American companies from helping China or Russia in their nuclear programs. Part of that nonproliferation effort is through export controls. We impose sanctions.
We have limits and rules around selling and sharing certain choke-point technologies related to nuclear enrichment, as well as biological warfare, surveillance, and other national security risks, and our framework, in fact, envisions sanctions and safeguards precisely in those areas for exactly the reasons we've been discussing here. Last October, the Biden administration used existing legal authorities as a first step in blocking the sale to China of some high-performance chips and the equipment to make those chips, and our framework calls for export controls and sanctions and legal restrictions. So I guess the question that we will be discussing, we're not going to resolve it today, regrettably, but we would appreciate your input going forward, and I'm inviting any of the listening audience here in the room or elsewhere to participate in this conversation on this issue and others: how should we draw a line on the hardware and technology that American companies are allowed to provide anyone else in the world, any other adversaries or friends? Because, as you've observed, Mr. Dally, and I think all of us accept, it's easily proliferated. Yes, if I could comment on this. Sure. You drew an analogy to nuclear regulation and mentioned the word choke points. I think the difference here is there really isn't a choke point, and I think there's a careful balance to be made between limiting where our chips go and what they're used for and, you know, disadvantaging American companies and the whole food chain that feeds them, because, you know, we're not the only people who make chips that can do AI. I wish we were, but we're not. There are companies around the world that can do it. There are other American companies, there are companies in Asia, there are companies in Europe, and if people can't get the chips they need to do AI from us, they will get them somewhere else. And what will happen then is, you know, it turns out it's not really the chips themselves that make them useful, it's the software.
And if all of a sudden the standard chips for people to do AI become something from, you know, pick a country, Singapore, all of a sudden the software engineers will start writing the software for those chips, and they will become the dominant chips, and the leadership of that technology area will have shifted from the U.S. to Singapore or whatever other country becomes dominant. So we have to be very careful to balance the national security considerations and the abuse-of-technology considerations against preserving the U.S. lead in this technology area. Mr. Smith. Yeah, it's a really important point, and what you have is the argument and the counter-argument. Let me for a moment channel what Senator Hawley often voices, which I think is also important. Sometimes you can approach this and say, look, if we don't provide this to somebody, somebody else will, so let's not worry about it. I get it. But at the end of the day, you know, whether you're a company or a country, I think you do have to have clarity about how you want your technology to be used. And, you know, I fully recognize that there may be a day in the future, after I retire from Microsoft, when I look back, and I don't want to say, oh, we did something bad, because if we didn't, somebody else would have. I want to say, no, we had clear values and we had principles and we had in place guardrails and protections, and we turned down sales so that somebody couldn't use our technology to abuse people's rights, and if we lost some business, that's the best reason in the world to lose some business. What's true of a company is true of a country. So I'm not trying to say that your view shouldn't be considered; it should. That's why this issue is complicated: how to strike that balance. Professor Hartzog, do you have any comments?
I think that was well said, and I would only add that it's also worth considering, in this discussion about how we safeguard these incredibly dangerous technologies and the risks that could follow if they, for example, proliferated: if it's so dangerous, then we need to revisit the existential question again and bring it back not only to thinking about how we put guardrails on, but how we lead by example, which I think you brought up, which is really important. And we don't want to win a race to violate human rights, right? That's not one that we want to be running. And it isn't simply Chinese companies importing chips from the United States and building their own data centers. Most AI companies rent capabilities from cloud providers, and we need to make sure that the cloud providers are not used to circumvent our export controls or sanctions. Mr. Smith, you raised the know-your-customer rules. Knowing your customers would require AI cloud providers whose models are deployed to know what companies are using those models. If you're leasing out a supercomputer, you need to make sure that your customer isn't the People's Liberation Army, that it isn't being used to subjugate the Uyghurs, that it isn't used to do facial recognition on dissidents or opponents in Iran, for example. But I do think that you've made a critical point, which is there is a moral imperative here, and I think there's a lesson in the history of this great country, the greatest in the history of the world, that when we lose our moral compass, we lose our way, and when we simply pursue economic or political interests, sometimes it's very shortsighted and we wander into a geopolitical swamp and quicksand. So I think these kinds of issues are very important to keep in mind when we lead by example.
I want to just make a final point, and then if Senator Hawley has questions, we're going to let him ask them. But on this issue of worker displacement, I mentioned at the very outset, I think that we are on the cusp of a new industrial revolution. We've seen this movie before, as they say, and it didn't turn out that well in the Industrial Revolution, where workers were displaced en masse; the textile factories and the mills in this country and all around the world essentially went out of business or replaced the workers with automation and mechanics. And I would respond by saying we need to train those workers and provide the education, you've alluded to it, and it needn't be a four-year college. You know, in my state of Connecticut, Electric Boat, Pratt & Whitney, defense contractors are going to need thousands of welders, electricians, trades people of all kinds, who will have not just jobs, they'll have careers, requiring skills that frankly I wouldn't begin to know how to do. And I haven't the aptitude to do them, and that's no false modesty. So I think there are tremendous opportunities here, not just in the creative sphere that you have mentioned, where, you know, we may think higher human talents come into play, but in all kinds of jobs that are being created daily already in this country. As I go around the state of Connecticut, the most common comment I hear from businesses is, we can't find enough people to do the jobs we have right now. We can't find people to fill the openings that we have. And that is, in my view, maybe the biggest challenge for the American economy today.
I think that's such an important point, and it's really worth putting in context everything we think about for jobs, because I wholeheartedly endorse, Senator Hawley, what you were saying before: we want people to have jobs, we want them to earn a good living, et cetera. First, let's consider the demographic context in which jobs are created. The world has populations that are leveling off or, in much of the world now, declining. One of the things we look at is every country, and we measure over five years: is the working-age population increasing or decreasing, and by how much? From 2020 to 2025, the working-age population in this country, people aged 20 to 64, will grow by 1 million people. The last time it grew by that small a number, do you know who was president of the United States? John Adams. That's how far back you have to go. A country like Italy, take that group of people, over the next 20 years it is going to decline by 41%. What's true of Italy is true to almost the same degree in Germany. It's already happening in Japan, in Korea. So we live in a world where, for many countries, we suddenly encounter what you find, I suspect, when you go to Hartford or St. Louis or Kansas City: people can't find enough police officers, enough nurses, enough teachers. That is a problem we desperately need to focus on solving. So how do we do that? I do think AI is something that can help, even in something like a call center. One of the things that's fascinating to me, we have more than 3,000 customers around the world running proofs of concept, and one that's fascinating is a bank in the Netherlands. You go into a call center today, and the desks of the workers look like a trading floor on Wall Street. They have six different terminals; somebody calls, and they are desperately trying to find the answer to a question. With something like GPT-4 and our services, those six terminals can become one. Somebody who is working there can ask a question, and the answer comes up.
And what they're finding is the person who's answering the phone, talking to a customer, can now spend more time concentrating on the customer and what they need. I appreciate all the challenges. There's so much uncertainty. We desperately need to focus on skills. But I really do hope that this is an era where we can use this to, frankly, help people fill jobs and get training. Let me put it this way: I'm excited about artificial intelligence. I'm even more excited about human intelligence. And if we can use artificial intelligence to help people exercise more human intelligence and earn more money, that would be something way more exciting to pursue than everything that we had to grapple with over the last decade around, say, social media and the like. Our framework very much focuses on treatment of workers and on providing more training. It may not be something that this body will do, but it's definitely something that has to be addressed. And it's not only displacement but also working conditions and opportunities within the workplace: for promotion, to prevent discrimination, to protect civil rights. We haven't talked about it in detail, but we deal with it in our framework in terms of transparency around decision-making. China may try to steal our technology, but it can't steal our people. And China has its own population challenges, with the need for more people, skilled labor. But as I say about Connecticut, we don't have gold mines or oil wells. What we have is a really able workforce. That's going to be the key, I think, to America's economy in the future, and AI can help promote the development of that workforce. Senator Hawley. You all have been really patient, and so has our staff. I want to thank our staff for this hearing. But most important, we're going to continue these hearings. It is so helpful to us.
I can go down our framework and tie the proposals to specific comments made by Sam Altman or others who have testified before, and we will enrich and expand our framework with the insights that you've given us. So I want to thank all of our witnesses, and again I look forward to continuing our bipartisan approach here. You made that point, Mr. Smith: we have to be bipartisan and adopt full measures, not half measures. Thank you all. This hearing is adjourned. [inaudible conversations] House Republican leaders are holding a conference with reporters on Capitol Hill. It comes a day after Speaker Kevin McCarthy's announcement of an impeachment inquiry into President Biden. You are watching live coverage here on C-SPAN2. [inaudible conversations]
