Because I think you've provided objective, fact-based views on what the dangers and the risks are, including potentially even human extinction, an existential threat that has been mentioned by many more than just the three of you, experts who know firsthand the potential for harm. But these fears need to be addressed, and I think they can be addressed through many of the suggestions that you are making to us, and others as well. I've come to the conclusion that we need some kind of regulatory agency, not just a reactive body, not just a passive rules-of-the-road maker issuing edicts on what guardrails should be, but one actually investing proactively in research so that we develop countermeasures against the kinds of autonomous, out-of-control scenarios that are a potential danger: an artificial intelligence device that is, in effect, programmed to resist any turning off; a decision by A.I. to begin nuclear retaliation against a nonexistent attack. The White House certainly has recognized the urgency with a historic meeting of the seven major companies, which made voluntary commitments. I commend and thank the President of the United States for recognizing the need to act. But we all know, and you have pointed out in testimony, that these commitments are unspecific and unenforceable. A number of them, on the most serious issues, say only that the companies will give attention to the problem. All good, but it's only a start. I know the doubters about Congress and about our ability to act. But the urgency here demands action. The future is not science fiction. It's not even the future; it's here and now. A number of you have put the timeline at two years before we see some of the most severe biological dangers. It may be shorter, because the pace of development is not only stunningly fast, it is also accelerating at a stunning rate, because of the quantity of chips, the speed of chips, the effectiveness of algorithms. It is a constant flow of development. We can condemn it. We can regret it. But it is real. And the White House's principles aptly align with a lot of what we have said among us in Congress, notably in the last hearing we held. We are here now because A.I. is already having a significant impact on our economy, safety, and democracy. The dangers are not just extinction, but the loss of jobs, one of potentially the worst nightmares that we have. Each day, these issues become more common, more serious, and more difficult to solve. And we can't repeat the mistakes we made on social media, which were to delay and disregard the danger. So the goal for this hearing is to lay the ground for legislation: to go from general principles to specific recommendations, to use this hearing to write real laws, enforceable laws. In our past two hearings, we heard from panelists that Section 230, the legal shield that protects social media, should not apply to A.I. Based on that feedback, Senator Hawley and I introduced the No Section 230 Immunity for AI Act. Building on our previous hearings, I think there are core standards that we are building bipartisan consensus around, and I welcome hearing from many others on the potential rules: establishing a licensing regime for companies engaged in high-risk A.I. development; a testing and auditing regimen by objective third parties or, preferably, by the new entity that we will establish; imposing legal limits on certain uses, related to elections, a danger Senator Klobuchar has raised directly, and related to nuclear warfare. China apparently agrees that A.I.
should not govern the use of nuclear weapons. Requiring transparency about the limits and use of A.I. models: this includes watermarking, labeling, disclosure when A.I. is being used, and data access, data access for researchers. So I appreciate the commitments that were made by Anthropic, OpenAI, and others at the White House related to security testing last week. It shows these goals are achievable, and that they will not stifle innovation. We need to be creative about the kind of agency or entity, the body, the administration, the office; I think what matters is that its authority is real and enforceable, and the resources invested in it. We are really lucky, very fortunate, to be joined by three true experts today, one of the most distinguished panels I've seen in my time in the United States Congress, which is only about 12 years: the head of one of the leading A.I. companies, which was founded with the goal of developing A.I. that is helpful, honest, and harmless; a researcher who did groundbreaking work that led him to be recognized as one of the fathers of A.I.; and a computer science professor whose publications and testimony on the ethics of A.I. have shaped regulatory efforts like the EU AI Act. Welcome to all of you, and thank you so much for being here. I turn to the ranking member, Senator Hawley.

Thank you very much, Mr. Chairman. Thanks to all our witnesses for being here. I want to start by thanking the chairman, Senator Blumenthal, for his terrific work on this hearing. It's been a privilege to work with him. These have been incredibly substantive hearings, and I'm looking forward to hearing from each of you today. I want to thank his staff for their terrific work; it took a lot of effort to put together hearings of such substance. I want to say thank you to Senator Blumenthal for being willing to do something about this problem. As he alluded to a moment ago, he and I a few weeks ago introduced the first bipartisan bill to put safeguards around A.I. development, and the first bill to be introduced in the United States Senate that will protect the rights of Americans to vindicate their privacy, their personal safety, and their interests in court against any company that would develop or deploy this technology. This is an absolutely critical, foundational right. You can give Americans paper rights, parchment rights, as our founders said, all you want; if they can't get into court to enforce them, they don't mean anything. I think it is significant that our first bipartisan effort is to guarantee every American the right to vindicate their rights, their interests, their privacy, their data protection, their kids' safety in the courts. I look forward to more to come with Senator Blumenthal and the other members I know are interested in this. For my part, I have expressed my own sense of what our priorities ought to be when it comes to legislation, and it is very simple: workers, kids, consumers, and national security. As A.I. develops, we've got to make sure we have safeguards in place that will ensure this new technology is actually good for the American people. I'm confident it will be good for the companies. I have no doubt about that. The biggest companies in the world, who are currently making money hand over fist in this country and benefit from our laws, I know they will be great. Google, Microsoft, Meta, many of whom have invested in the companies we will talk to today; we will get to that a bit more in just a minute. I'm confident they will do great.
What I'm less confident of is that the American people are going to do all right. So I'm less interested in the corporations' profitability; in fact, I'm not interested in that at all. I am interested in protecting the rights of American workers and American families and American consumers against these massive companies that threaten to become a law unto themselves. You want to talk about dystopia? Imagine a world in which A.I. is controlled by one or two or three corporations that are basically governments unto themselves, and then the United States government, a fourth entity. Talk about a massive accretion of power from the people to the powerful. That is the true nightmare, and for my money, that is what this body has got to prevent. We want to see technology developed in a way that actually benefits the people, the workers, the kids, the families of this country. And I think the real question before Congress is, will Congress actually do anything? As Senator Blumenthal put his finger on precisely: look at what this Congress did, or did not do, with regard to the very same companies, the same behemoth companies, when it came to social media. It's all the same players, let's be honest. We are talking about the same people. A.I. is like social media: it's Google again, it's Microsoft, it's Meta, all the same people. And what I notice is, in my short time in the Senate, there's a lot of talk about doing something about Big Tech, and absolutely zero movement to actually put meaningful legislation on the floor of the United States Senate and do something about it. I think the real question is, will the Senate actually act? Will the leadership in both parties, both parties, actually be willing to act? We've had a lot of talk, but now is the time for action. If the urgency of this new generative A.I. technology does not make that clear to folks, then you will never be convinced. To me, that really defines the urgency of this moment. Thank you, Mr. Chairman.

I am going to turn to Senator Klobuchar, in case she has some remarks.

Thank you. A woman of action, I hope, Senator Hawley, and someone who has invested a lot of time. I want to thank both of you for doing this. I mostly did want to hear from the witnesses, but I do agree with both Senator Blumenthal and Senator Hawley: this is the moment. Look at the fact that this has been bipartisan so far: the work that Senator Schumer and Senator Young are doing, the work that is going on in this subcommittee with the two of you, and the work Senator Hawley and I are also engaged in on some of the other issues related to this. I actually think that if we don't act soon, we could decay into not just partisanship but inaction. The point Senator Hawley just made is right. Congress didn't get ahead of Section 230 and the like, things that were done for maybe good reason at the time, and then didn't do anything. Now you've got kids getting addicted to fentanyl that they buy online, you've got privacy issues, you've got kids being exposed to content they shouldn't see, you've got small businesses that have been pushed out, and the like. I think we can fix some of that still. But this is certainly a moment to engage. I'm actually really excited about what we can get done, the potential for good here: what we can do to put in guardrails and have an American way of putting things in place, and not just defer to the rest of the world, which is what is starting to happen on some of the other topics I raised.
I'm particularly interested, though it's not as much our focus today, in the election side and democracy: making sure that we do not have these fake ads and fake images of real people. I don't care what political party people are with; we should give voters the information they need to make a decision, and we should be able to protect our democracy. There is good work being done on that front. So thank you.

Let me introduce the witnesses and seize this moment to let you have the floor. We will be joined by Dario Amodei, who is the CEO of Anthropic, an A.I. safety and research company. It's a public benefit corporation dedicated to building steerable A.I. systems that people can rely on and to generating research about the opportunities and risks of A.I. Anthropic's A.I. assistant is based on its research into training A.I. systems that are helpful, honest, and harmless. Professor Yoshua Bengio is a worldwide recognized leading expert in artificial intelligence, known for conceptual and engineering breakthroughs in artificial neural networks; the work he pioneered led to much of what we see today. He's a full professor in the department of computer science at the University of Montreal and the founder and scientific director of Mila, the Quebec AI Institute, one of the largest academic institutes in deep learning and one of the three federally funded centers of excellence in A.I. research and innovation in Canada. I'm not going to repeat all the awards and recognition that you've received, because it would probably take the rest of the afternoon. We are also honored to be joined by Stuart Russell. He received his B.A. with first-class honors in physics from Oxford University in 1982 and a Ph.D. in computer science from Stanford in 1986. He joined the faculty of the University of California, Berkeley, where he is a professor, and formerly chair, of electrical engineering and computer sciences, the holder of the Smith-Zadeh Chair in Engineering, director of the Center for Human-Compatible AI, and director of the Kavli Center for Ethics, Science, and the Public. He has also served as an adjunct professor of neurological surgery at UC San Francisco. Again, many honors and recognitions, all of you. In accordance with the custom of our committee, I'm going to ask you to stand and take an oath. Do you solemnly swear that the testimony you are about to give is the truth, the whole truth, and nothing but the truth, so help you God? Thank you. Mr. Amodei, we will begin with you.

Excuse me. Chairman Blumenthal, Ranking Member Hawley, and members of the committee, thank you for the opportunity to discuss the risks and oversight of A.I. with you. Anthropic is a public benefit corporation that aims to lead by example in developing techniques to make A.I. systems safer and more controllable, and in deploying these safety techniques in state-of-the-art models. Research conducted by Anthropic includes constitutional A.I., a method for training A.I. systems according to explicit principles; early work on adversarial testing of A.I. systems to uncover bad behavior; and foundational work in A.I. interpretability, the science of trying to understand why A.I. behaves the way it does. This month, after extensive testing, we were proud to launch our A.I. model for U.S. users, putting many of these safety improvements into practice. While we are the first to admit that our measures are still far from perfect, we believe they are an important step forward in a race to the top on safety. We hope we can inspire other researchers and companies to do even better. A.I.
will help our country accelerate progress in medical research and many other areas. As the opening remarks noted, the benefits are great. I would not have founded Anthropic if I did not believe A.I.'s benefits could outweigh its risks. However, it is critical that we address the risks. My written testimony covers three categories of risk: short-term risks we face right now, such as bias, privacy, and misinformation; medium-term risks related to misuse of A.I. systems as they become better at science and engineering tasks; and long-term risks related to whether models might threaten humanity as they become truly autonomous. These were also mentioned in the opening statements. In these short remarks, I want to focus on the medium-term risks, which present an alarming combination of imminence and severity. Specifically, Anthropic is concerned that A.I. could empower a much larger set of actors to misuse biology. Over the last six months, Anthropic, in collaboration with world-class biosecurity experts, has conducted a study of the potential for A.I. to contribute to the misuse of biology. Today, certain steps in bioweapons production involve knowledge that can't be found on Google or in textbooks and requires a high level of specialized expertise; this is one of the things that currently protects us from attacks. We found that today's A.I. tools can fill in some of these steps, albeit incompletely and unreliably. In other words, they are showing the first signs of danger. However, a straightforward extrapolation from today's systems to those we expect to see in two to three years suggests a substantial risk that A.I. systems will be able to fill in all the missing pieces, enabling many more actors to carry out large-scale biological attacks. We believe this represents a grave threat to U.S. national security. We've instituted mitigations against these risks in our own deployed models and briefed a number of U.S. government officials, all of whom found the results disquieting. We are piloting a responsible disclosure process with other A.I. companies to share information on this and similar risks. However, private action is not enough. This risk, and many others like it, requires a systemic policy response. We recommend three broad classes of actions. First, the U.S. must secure the A.I. supply chain in order to maintain its lead while keeping these technologies out of the hands of bad actors. The supply chain runs from semiconductor manufacturing equipment to chips, and even to the security of A.I. models stored on the servers of companies like ours. Second, we recommend a testing and auditing regime for new and more powerful models. Similar to cars or airplanes, A.I. models of the near future will be powerful machines that possess great utility but can be lethal if designed incorrectly or misused. New A.I. models should have to pass a rigorous battery of safety tests before they can be released to the public at all, including tests by third parties and by national security experts in government. Third, we should recognize that the science of testing and auditing A.I. systems is in its infancy. It is not currently easy to detect all the bad behaviors an A.I. system is capable of without first broadly deploying it to users, which is what creates the risk. Thus, it is important to fund both measurement and research on measurement, to ensure that testing and auditing are actually effective. Funding NIST and the National A.I. Research Resource are two examples of ways to ensure America leads here.
These three directions are mutually reinforcing: responsible supply chain policies help America keep its lead; rigorous standards can be imposed on our own companies without ceding our national lead to adversaries; and funding for measurement makes these rigorous standards meaningful. The balance between mitigating A.I.'s risks and maximizing its benefits will be a difficult one, but I'm confident our country can rise to the challenge. Thank you.

Thank you very much. Why don't we go to Mr. Bengio.

Chairman Blumenthal, Ranking Member Hawley, members of the Judiciary Committee, thank you for the invitation to speak today. The capabilities of A.I. systems have steadily increased over the last two decades, thanks to advances in deep learning, which I and others introduced. While this revolution has the potential to enable tremendous progress and innovation, it also entails a wide range of risks: from immediate ones, like discrimination; to growing ones, like disinformation; and even more concerning ones in the future, like loss of control of superhuman A.I. Recently, I and many others have been surprised by the giant leap realized by systems like ChatGPT, to the point where it becomes difficult to discern whether one is interacting with another human or a machine. These advancements have led many top A.I. researchers, including myself, to revise our estimates of when human-level intelligence could be achieved. Previously thought to be decades or even centuries away, we now believe it could be within a few years, or decades. The shorter timeframe, say five years, is really worrisome, because we will need more time to effectively mitigate the potentially significant threats to democracy, national security, and our collective future. As some have said here, if this technology goes wrong, it could go terribly wrong. These severe risks could arise either intentionally, because of malicious actors using A.I. systems to achieve harmful goals, or unintentionally, if an A.I. system develops strategies that are misaligned with our values. I would like to emphasize four factors that governments can focus on in their regulatory efforts to mitigate A.I. harms and risks. First, access: limiting who has access to powerful A.I. systems, and structuring the appropriate protocols, duties, oversight, and incentives for them to act safely. Second, alignment: ensuring that A.I. systems will act as intended, in agreement with our values and norms. Third, raw intellectual power: this depends on the level of sophistication of the algorithms and the scale of computing resources and data sets. And fourth, scope of action: an A.I. system can effect harm indirectly, for example through human actions, or directly, for example through the internet. Looking at risk through the lens of each of these four factors, access, alignment, power, and scope of action, is critical to designing appropriate government intervention. I firmly believe that urgent efforts, preferably in the coming months, are required in the following three areas. First, the coordination of highly agile national and international regulatory frameworks and incentives that bolster safety. This would require licenses for people and organizations, with standardized duties to evaluate and mitigate potential harm; it would allow independent audits and restrict A.I. systems with unacceptable levels of risk. Second, because the current methodology is not demonstrably safe, significantly increased global and diverse research focused on A.I.
safety, enabling the informed creation of regulations, protocols, safety technologies, and governance structures. And third, research on countermeasures to protect society from potential rogue A.I. No regulation is going to be perfect. This research on A.I. safety and national security should be conducted in several highly secure and decentralized labs, operating under multinational oversight, to mitigate an A.I. arms race. Given the significant potential for detrimental consequences, we must allocate substantial additional resources to safeguard our future, at least as much as we are collectively, globally, investing in increasing the capabilities of A.I. I believe we have a moral responsibility to mobilize our greatest minds and make major investments in a bold and internationally coordinated effort to fully reap the economic and social benefits of A.I. while protecting society and our shared future against its potential perils. Thank you for your attention to this pressing matter. I look forward to your questions.

Thank you very much. Professor Russell.

Thank you, Chair Blumenthal, Ranking Member Hawley, and members of the subcommittee, for the invitation to speak today and for your excellent work on this vital issue. A.I., as we all know, is the study of how to make machines intelligent. Its stated goal is general-purpose artificial intelligence, sometimes called AGI, artificial general intelligence: machines that match human capabilities in every relevant dimension. The last 80 years have seen a lot of progress toward that goal. For most of that time, we created systems whose internal operations we understood, drawing on centuries of work in mathematics, statistics, philosophy, and operations research. Over the last decade, that has changed. Beginning with vision and speech recognition, and now with language, the dominant approach has been end-to-end training with billions or trillions of adjustable parameters. The success of these systems is undeniable, but their internal principles of operation remain a mystery. This is particularly true for the large language models, or LLMs, such as ChatGPT. Many researchers now see AGI on the horizon. In my view, LLMs are a piece of the puzzle; we are not sure what shape the piece is yet, or how it fits into the puzzle. But the field is working hard on those questions, and progress is rapid. If we succeed, the upside could be enormous: I've estimated a cash value of at least 14 quadrillion dollars for this technology, a huge magnet in the future pulling us forward. On the other hand, Alan Turing, the founder of computer science, warned in 1951 that once A.I. outstrips our feeble powers, we should have to expect the machines to take control. We have pretty much completely ignored this warning. It's as if an alien civilization warned us by email of its impending arrival and we replied, humanity is currently out of the office. Fortunately, humanity is now back in the office and has read the email from the alien. Of course, many of the risks from A.I. are well recognized already, including bias, disinformation, manipulation, and impacts on employment. I'm happy to discuss any of these. Most of my work over the last decade has been on the problem of control: how do we maintain power, forever, over entities more powerful than ourselves? The core problem we've studied comes from A.I. systems pursuing fixed objectives that are misspecified, the so-called King Midas problem.
For example, social media algorithms were trained to maximize clicks, and learned to do so by manipulating human users and polarizing societies. But with LLMs, we don't even know what their objectives are. They learn to imitate humans, and probably absorb all human goals in the process. Now, regulation is often said to stifle innovation. But there is no real tradeoff between safety and innovation: A.I. systems that harm human beings are simply not good A.I. I believe predictability is as essential for safe A.I. as it is for the autopilot on an airplane. This committee has discussed ideas such as third-party testing, licensing, a national agency, and international coordinating bodies, all of which I support. Here are some more ways to, as I said, move fast and fix things. First, an absolute right to know if one is interacting with a person or a machine. Second, no algorithms that can decide to kill human beings, particularly when attached to nuclear weapons. Third, a kill switch that must be activated if systems break into other computers or replicate themselves. Fourth, going beyond the voluntary steps announced last Friday, systems that break the rules must be recalled from the market, for anything from defaming real individuals to helping terrorists build biological weapons. Developers may argue that preventing these behaviors is too hard, because LLMs have no notion of truth and are just trying to help. This is no excuse. Eventually, and the sooner the better, I would say, we will develop forms of A.I. that are provably safe and beneficial, which can then be mandated. Until then, we need real regulation and a pervasive culture of safety. Thank you.

Thank you very much. I will begin the questioning. We will have seven-minute rounds, and I expect we will have more than one, given the challenge and the complexity that you have raised so eloquently. I have to say, Professor Russell, you also, in your written testimony, recount a remark of Lord Rutherford on September 11th, 1933, at a conference where he was asked about atomic energy. He said, quote, anyone who looks for a source of power in the transformation of the atoms is talking moonshine, end quote. Ideas about the limits of human ingenuity have been proven wrong again and again and again. We've managed to do things people thought were unthinkable, whether it's the Manhattan Project, under the guidance of Robert Oppenheimer, who has now become a boldface popular presence, or putting a man on the moon, which many thought was impossible to do. We know how to do big things. It's the big things we must do, and we have to be back in the office to answer that email, which is in fact a siren blaring for everyone to hear and see: A.I. is here, and beware of what it will do if we don't do something to control it. And not just at some distant point in the future, but, as all of you have said, with a time horizon that would have been thought unimaginable a few years ago, unimaginably quick. Let me ask each of you, because part of that time horizon is the next election, in 2024. If there is anything that focuses the attention of Congress, it is an election; nothing better than an election to focus the attention of Congress. Let me ask each of you what you see as the immediate threat to the integrity of our election system, whether it is misinformation or manipulation of electoral counts, or the possible areas where you see an immediate danger if we go into this next election. I will begin with you, Mr. Amodei.

Yes.
Thanks for the question, Senator. You know, I think this is obviously a very timely thing to worry about. When I think of the risks here, my mind goes to misinformation: generation of deepfakes, use of A.I. systems to manipulate people or produce propaganda, or just do anything deceptive. I can speak a little bit about some of the things we are doing. We train our model with constitutional A.I., where you can lay out the principles. It doesn't mean the model will always follow the principles, but there are terms in our constitution, which is publicly available, that tell the model not to generate misinformation. The same is true in our business terms of use. One of the commitments made at the White House was to start to watermark content, particularly in the audio and visual domains. I think that is very helpful. Watermarking gives you the technical capability to detect that something is A.I. generated, but requiring on the legal side that it be labeled would be something very helpful and timely.

Thank you. Mr. Bengio.

[inaudible] I agree with all of that. I will add a few things. One concern I have is that even if companies use watermarking, there are now several open-source versions of LLMs that one can train or use, including weights that have been made available to the global community, and we also need to understand what happens outside of that framework; in other words, some people are not going to obey that law. One important thing I'm concerned about is that one can take a pretrained model that a company has made public and then, without huge computing resources, not the hundred million dollars it costs to train them but something very cheap, tune these systems to a particular task, which could be to play the game of being a troll, for example, or to generate deepfakes that are likely more powerful than anything seen up to now. I don't know how to fix this, but I want to bring it to the attention of this committee. Thank you.

Well, on that point, and on both of the excellent points that you made, I mean, one immediate fix is to avoid releasing more of these pretrained large models. That is a thing government can do. Right now, very few companies, including the ones at the White House last week, can do that, and that is at least one place where government can act. Professor Russell?

I would certainly like to echo the remarks of the other two witnesses. I would say my major concern with respect to elections would be disinformation, particularly external influence campaigns. Because, with these systems, we can present to the system a great deal of information about an individual: everything they've ever written or published on Twitter or Facebook, their whole social media presence. And you can train the system and ask it to generate a disinformation campaign tailored particularly to that person. And then we can do that for a million people before lunch. And that has a far greater effect than the spamming and broadcasting of false information that isn't tailored to the individual. I think labeling is important. For text, it's going to be very difficult to tell whether a short piece of text is machine generated, if someone does not want you to know that it's machine generated. I think an important proposal from the Global Partnership on A.I.
is for a kind of escrow, an encrypted storage where every output from the model is stored in encrypted form, enabling, for example, a platform to check whether a piece of text that is uploaded is actually machine generated, by testing it against the escrow storage without revealing private information, et cetera. That could be done. Another problem we face is that there are many, many extremely well-intended efforts to create standards around labeling, and around how platforms should respond to labels in terms of what should be posted: media organizations like the BBC, the New York Times, the Wall Street Journal, et cetera; there are dozens of these coalitions. The effort is very fragmented. There are as many standards as there are coalitions. I think it really needs national and probably international leadership to bring these together and have pretty much a unified approach and standards that all organizations can sign up to. And thirdly, I think there's a lot of experience in other theaters, such as the equity markets, in real estate, in the insurance business, where truth is absolutely essential. If you take the equity markets: if companies could make up their quarterly figures, the equity market would collapse. So we developed this whole regulated third-party structure of accountants and audits so that the information is reasonably trustworthy. In real estate, we have title registries, notaries, all kinds of stuff to make it work. We don't really have that structure in the public information sphere, and what we see, again, is very fragmented fact-checking; I suppose Elon Musk is going to have his own GPT, and so on. Again, this is something I think government can help with, in terms of licensing and standards for how those organizations can function, and, again, what platforms do with the information that the third-party institutions supply, to enable users to have access to high-quality information streams. I think there's quite a lot we can do, but it's pretty urgent. Thank you.

I think all these points argue very, very powerfully against fragmentation and for some kind of single entity that would establish oversight standards and enforcement of rules. Because, as you say, without that structure, actors could not only evade quarterly reporting, they could also make up numbers for corporations, which would disastrously impact the stock of the corporation.

If I might add one point: we are absolutely not talking about a ministry of truth. In some sense, it is similar to what happens in the courts. The courts have standards and processes to find out what the truth is, but they don't say what the truth is. That's what we need.

Protecting our election system has to be a priority, and I think all of you are very, very emphatically and cogently making that point.

I would like to add one suggestion, which may sound drastic, even if you look at other fields like banking. In order to reduce the chances that A.I. systems will massively influence voters through social media, one thing that should have been done a long time ago is that social media accounts should be restricted to actual human beings that have identified themselves, ideally in person. Right now, social media companies are spending a lot of money to figure out whether an account is legitimate or not. They will not, by themselves, enforce these kinds of rules, because it would create friction in recruiting more users. But if the government says everyone needs to do it, they will be happy. Well, maybe not happy, but that's what I would recommend.

Thank you. Senator Hawley.
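To make the escrow idea Professor Russell describes above concrete, here is a minimal sketch, assuming a trusted escrow service and exact-match hashing; all names in it are hypothetical, and it is an illustration of the concept rather than the Global Partnership on A.I.'s actual design. The provider deposits a keyed hash of each generated text, and a platform later asks whether an uploaded text matches any deposit, without the escrow ever revealing the stored texts:

```python
import hashlib

# Toy sketch of an output-escrow provenance check (hypothetical design).
# The provider deposits a keyed hash of every model output; a platform
# checks uploaded text by hashing it the same way and asking for membership.

SECRET_SALT = b"escrow-demo-salt"  # in practice, a key held by the escrow service

def fingerprint(text: str) -> str:
    """Normalize whitespace and case, then hash, so equal outputs match."""
    normalized = " ".join(text.split()).lower()
    return hashlib.sha256(SECRET_SALT + normalized.encode("utf-8")).hexdigest()

class Escrow:
    def __init__(self):
        self._store: set[str] = set()

    def deposit(self, generated_text: str) -> None:
        # Called by the model provider at generation time.
        self._store.add(fingerprint(generated_text))

    def is_machine_generated(self, uploaded_text: str) -> bool:
        # Called by a platform before publishing uploaded content.
        return fingerprint(uploaded_text) in self._store

escrow = Escrow()
escrow.deposit("The senator's record shows he voted against the bill.")
print(escrow.is_machine_generated("The senator's record shows he voted against the bill."))  # True
print(escrow.is_machine_generated("An unrelated human-written sentence."))                    # False
```

Exact-match hashing is the simplest possible instantiation; a real design would need fuzzy matching to survive paraphrase, which is part of why the witnesses pair the idea with national and international standard-setting.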
Let's start, if we could, by talking about who controls this technology currently and who is developing it. If I could just start with you, Mr. Amodei, to understand some of the structure of your company, Anthropic: Google has a significant stake in your company, doesn't it?

Yes, Google was an investor in Anthropic. They don't control any board seats, but yes, Google is an investor in Anthropic.

Give us a sense. What are we talking about? What kind of stake are we talking about?

I don't remember exactly; I couldn't give it to you exactly. I suspect low double digits, but I would need to follow up on that.

The press has reported a 300 million dollar investment with at least a 10 percent stake in the company. Does that sound roughly correct?

That sounds broadly correct.

That is a pretty big stake. Let's talk about OpenAI, where you used to work. Right?

Yes.

OpenAI, it's been reported, has a very significant chunk of funding from another massive technology company, Microsoft. It's been reported in the press that this is one of the reasons you left the company, that you were concerned about this. I will let you speak to that if you want to; I don't want to put words in your mouth. But the stake I believe Microsoft is reported to have in OpenAI approaches 49 percent. It's not controlling, but it's awfully, awfully close. Tell me this. When Google's stake in your company occurred, the Financial Times broke the story on this; they reported the transaction was not publicized when it actually happened. Why was that? Do you know?

Yeah, I couldn't speak to the decisions made by Google here. I do want to make one point, which is that our relationship with Google at the present time is primarily focused on hardware. In order to train these models, we need chips to process data, and this investment came with a commitment to spend on the cloud. Our relationship with Google has been primarily focused on hardware. Primarily, it's been commercial, not involving governance.

So there are no plans to integrate your Claude, for example, with Google search?

That's not occurring at the present time.

I know it's not occurring. But are there plans to do so, I guess is my question?

I can't speak to what the possibilities are for the future, but that's not something that's occurring at present.

Don't you think that would be frightening? Just to go back to something Professor Russell said a moment ago: he talked about the ability, in the election context, of A.I. to take the information about one political figure, everything about that person, and come up with a very convincing misinformation campaign. Now imagine if that technology, the same large language model, for example, also had the voter files of millions of voters and knew exactly what would capture those voters' attention, what arguments they found most persuasive. The ability to weaponize this information and target it toward particular voters would be exceptionally powerful, right? Now, search is all about getting and keeping users' attention. It's how Google makes money. I'm just imagining your technology, generative A.I., aligned and integrated and folded into search: the power that would give Google to get users' attention, keep their attention, push information to them. It would be extraordinary, wouldn't it?

Yes, Senator, I think these are very important issues. I want to raise a few points here.
I want to return to some of the things I said in response to Senator Blumenthal's questioning on misinformation. We put terms in the constitution that tell the model not to generate misinformation or political bias. I want to emphasize again that these methods are not perfect; the science is not exact, and it's something we keep working on. I think you are also getting at some important privacy issues about personal information here. This is an area where, also in our constitution, we discourage our models from producing personal information, and we train only on publicly available information. It's a very core part of our mission to produce models that don't, or at least try not to, have these problems.

Well, you said you tell the model not to produce misinformation. I'm not sure exactly what that means. Can you tell it not to help massive companies make a profit? That would be Google's interest, above all: profits. The whole reason they want to get users' attention and keep users' attention, and keep us searching and scrolling, is so they can push products to us and make lots of money, which they do. It seems to me that your technology melded with theirs could make them an enormous amount of money. Would that be so good for the American consumer?

I can't speak to the decisions made by different companies like Google, but we are doing the best we can to make our systems ethical. You know, in terms of how we tell our model not to do things, there is a training process where we train the model in a loop, asking it, for some given output: is your response in line with these principles? Over the last six months, since we developed this method of constitutional A.I., we've gotten better and better at getting the models in line with the constitution. I would still say it is not perfect, but we very much focus on the safety of the model, so that it doesn't do the things you are concerned about, Senator.

Listen, I think this is an important point, and I want to underscore it. I appreciate that you want your models to be ethical, and so forth. That's great. But I would just suggest that that is in the eye of the beholder, and the content of what is ethical or what is appropriate is going to vary significantly depending on who controls the technology. I'm sure that Google or Microsoft, using these generative models, linking them up with their ad-based models, would say it's perfectly ethical for us to try to get the attention of as many consumers as possible, by any means possible, and hold it as long as possible. They would say there is no problem with that. That is not misinformation; that's business. Would that be good for American consumers? I doubt it. Would that be respectful of American consumers' privacy and their integrity? Would it protect them from manipulation? I doubt it. We've got to give serious thought here to who controls this technology and how they are using it. I appreciate all you are doing; I appreciate your commitments. I think it's great. I just want to underline that there is a very serious structural issue here that we are going to have to think hard about: the control of this technology by just a handful of companies, and governments, is a huge, huge problem. Hopefully we come back to this in the next round.

Thanks, Senator Hawley. Senator Klobuchar.

Thank you very much. I chair the Rules Committee, and we are working on a number of pieces of legislation. I really appreciate working with Senator Hawley on some of this.
One bill would require watermarks, making sure that election materials produced by A.I. are identified. I don't think that's enough, when you consider that someone is going to watch a fake Joe Biden or fake Donald Trump or fake Elizabeth Warren, all of which has really happened, and not know who the person is, not know if it's really them. A little mark at the end saying, by the way, that was produced by A.I., is not going to help at the very end. It might for some things. Could you address that, Professor Russell? Within the clear confines of the Constitution, for things like that, will we have to do more than just watermark?

I do want to be careful not to veer into, once again, the ministry of truth idea. But I think clear labeling matters. I mean, if you look at what happened with credit cards, for example: it used to be that credit cards came with 14 pages of tiny, tiny print that allowed companies to rip off the consumer or client. Eventually, Congress said, no, there's got to be disclosure. You've got to say, this is the interest rate, this is the grace period, this is the late fee, and a couple of other things, and that has to be in big print on the front of the envelope or on the front page. There are very strict rules now about how you direct-market credit cards and other lending products. That's been enormously beneficial, and it has allowed competition on those primary features of the product.

You can't really compare a credit card to someone telling the United States of America that there has been some kind of nuclear explosion when there hasn't.

Right. But the point being, we can mandate much clearer labeling than just a little thing in the corner at the end of a 90-second piece, right? We could say, for example, there's got to be a big red frame around the image when it's a machine-generated image.

Professor Bengio, what do you think?

Well, my view on this is that we should be very careful with any kind of use of A.I. for political purposes, political advertising, whether it is done officially through some agency that does advertising or in a more direct way. It might not be formal advertising; it might just be put up for circulation.

That is exactly what is being confronted. The Federal Election Commission has asked for authority, including from the Republican members, to do more. Go ahead.

In many countries, any kind of advertising, which would include disseminating this sort of material, is not allowed for some period before the election, to try to minimize the potential effect of these things.

Right. Mr. Amodei, switching gears here, because I talked to some people in the banking community about this, small banks: they are really worried they will see A.I. used to scam people, pretending to be your mom's voice, or more likely your granddaughter's voice, actually getting that voice right, making the call for money. How can Congress ensure that companies that create A.I. platforms are not used for those deceptive practices, so that doesn't happen?

Yes, Senator. I think these questions about deception and scams are closely related to the questions about misinformation, right?

Yeah.

They are two sides of the same coin. On misinformation, I wanted to clarify, there are technical measures and policy measures. Watermarking is a technical measure: watermarking makes it possible to take a piece of A.I. content, run it through some automated process, and get back an answer, that it was generated by A.I. or not generated by A.I.
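As an illustration of the kind of automated check being described here, the following is a toy sketch of a "green list" text watermark detector, in the spirit of published academic schemes; it is hypothetical and not any company's deployed system. Generation would bias the model toward pseudorandomly chosen "green" tokens, and detection simply measures how often adjacent tokens land green:

```python
import hashlib

# Toy sketch of a "green list" text watermark detector (hypothetical scheme).
# Generation would nudge the model toward "green" tokens chosen pseudorandomly
# from the previous token; detection counts how many tokens landed green.

def is_green(prev_token: str, token: str) -> bool:
    # Pseudorandom 50/50 split of the vocabulary, seeded by the previous token.
    digest = hashlib.sha256((prev_token + "|" + token).encode("utf-8")).digest()
    return digest[0] % 2 == 0

def green_fraction(tokens: list[str]) -> float:
    hits = sum(is_green(a, b) for a, b in zip(tokens, tokens[1:]))
    return hits / max(len(tokens) - 1, 1)

def looks_watermarked(text: str, threshold: float = 0.7) -> bool:
    # Unwatermarked text should score near 0.5; watermarked text much higher.
    return green_fraction(text.split()) >= threshold

print(looks_watermarked("ordinary human sentence with no embedded signal here"))
```

Unwatermarked text scores near one half, while watermarked text scores far above it, which is what lets a detector return "generated by A.I. or not" without ever seeing the model itself.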
That's important, and we are working on that, and others are working on that. I think we also need policy measures. Going back to what the other two witnesses said, a requirement to label A.I. content is not the same as a requirement to watermark it: one is for the designer of the A.I. system to embed something; the other is for whoever, in the end, deploys the content to be required to label it. I think we need both, and probably this Congress can do more on the second thing than the companies and researchers can do on the first.

Okay. What I am talking about is the scams, where the fake granddaughter calls the grandma, who goes out and takes her money out. Are we going to let that happen?

Certainly, it's already illegal to do that, and I can think of a number of authorities that could be strengthened for A.I. in particular. I think it is up to the Senate and Congress to figure out what the best measure is. Certainly I'm in favor of strengthening protection there.

I hope so. About half of the states have laws that give people control over the use of their name, image, and voice, but in the other half of the country, someone who is harmed by a fake recording purporting to be them has little recourse. We just did a hearing on this. Would you support a federal law, Mr. Bengio, that gives individuals control of the use of their name, image, and voice?

Certainly. But I would go further. If you think about counterfeit money, the criminal penalties are very high, and that deters a lot of people. When it comes to counterfeiting humans, the penalties should be at least at the same level.

Okay. One last thing I wanted to ask about here is the ability of researchers to figure out what is going on. A number of us, including Senator Blumenthal, Senator Cassidy, Senator Cornyn, and Senator Romney, are supporting the Platform Accountability and Transparency Act, which gives researchers the transparency that we need by requiring social media companies to share data with researchers, so we can try to figure out what is happening with the algorithms and the like. Dr. Russell, why is researcher access to social media platform data so important for regulating A.I.?

In my experience, actually, it involved three years of negotiating an agreement with the large platforms, only to be told at the end that they didn't want to do this collaborative agreement after all.

We don't really have three years to spare on this.

No, we don't. I have discussed this with the director of the digital division of the OECD, and he said I was about the tenth person who had told him the same story. It seems there is a modus operandi of appearing open to collaboration with researchers, only to terminate that collaboration before it begins. There have been claims that the platforms provided open data sets to researchers to allow this type of research, but I've talked to those researchers, and it has not happened.

We want to put in place these regulations. We know that we can't wait for you to get all the data; we can't let it take three years. But putting in place clear mandates that the data be shared, why is that helpful?

Because the effects of, for example, the social media recommender systems are correlated across hundreds of millions of people. Those systems can shift public opinion in ways that are not necessarily deliberate. They are probably not deliberate. But they can be massive and polarizing.
Unless we have access to the data, which the companies internally certainly have, and I think the Facebook revelations from a few years ago suggest that they are totally aware of what is happening, that information is not available to governments and researchers. I think in a democracy, we have a right to know if our democracy is being subverted by an algorithm. That is absolutely crucial.

All right. I'm going to ask one more thing.

Trying to respond to your question from another angle: why researchers? I would say academic researchers, not all of them, but many of them, don't have any commercial ties. They have a reputation to keep and careers to protect. They are not perfect, but I think it's a very good yardstick.

Except for Professor Russell. Okay, very good. Do you agree with that too, then?

I just want to say I think transparency is important as an even broader issue. A number of our research efforts go into looking inside A.I. systems to see what happens inside them, why they make the decisions they make.

I'm going to turn it over to my colleague, who is impatiently waiting.

Thank you. We will circle back to the black box algorithm, which is a major topic of interest. Senator Blackburn.

Thank you, Mister Chairman, and thank you all for being here. Mr. Amodei, I think you got a little aggravated trying to answer Senator Hawley's question about something you may create that you think of as an ethical use. Let me give you the unethical use. Senator Blumenthal and I have worked together for nearly four years on looking at social media and the harms that have happened to our nation's youth. Hopefully, this week, our kids' online safety bill comes out of committee. The intent of social media was not to harm children, cause mental health crises, put children in touch with drug dealers and pedophiles. But we have heard story after story and have uncovered instance after instance where the technology was used in ways that nobody ever thought it would be. And now we are trying to clean it up, because we had not put the right guardrails in place. As we look at A.I., the guardrails are very important. Professor Russell, I want to come to you. The U.S. is behind, we are really behind, our colleagues in the EU, the UK, New Zealand, Australia, and Canada when it comes to online consumer privacy: having a way for consumers to protect their name, image, and voice, having a way for them to protect their data, their lives, so that A.I. is not trained on their data. So talk for just a minute about how we keep our position as a global leader in generative A.I. and at the same time protect consumer privacy. Would a federal privacy standard help? What are your recommendations there?

I think there needs to be, absolutely, a requirement to disclose if the system is harvesting data from individual conversations. My guess is that people would immediately stop using a system that says: I am taking your conversations, and anyone in the country can potentially listen in on this conversation.

Do you think the industry is mature enough to self-regulate?

No.

So therefore it is going to be necessary for us to mandate a structure?

Yes. I think there was a change of heart at OpenAI: initially they were harvesting data produced by individual conversations, and then more recently they said, we are going to stop doing that.
But clearly, if you are in a company and you want an assistant to help you with internal operations, you are going to be divulging company proprietary information to the chatbot to get it to give you the answer you want. If that were available to your competitor, it would be terrible. So we need a clear definition of what the guarantee is; the technical term is oblivious: basically, whatever we talk about, I am going to forget completely, right? That is the guarantee that they should offer. I believe that chatbots and any other devices that one interacts with should offer that as a formal guarantee. Let me also make a point about enforcement, which I think Senator Hawley mentioned at the beginning: the right of action. For example, as I understand it, it is a federal crime for a company to robocall people on the federal Do Not Call list. My estimate is there are hundreds of millions, possibly billions, of such federal crimes happening every year, and we are not really enforcing anything.

So you would say existing law is not sufficient for A.I.?

Correct. And existing enforcement patterns.

Let me move on. In Tennessee, A.I. is important. Our auto industry uses so many A.I. applications. We have dealt with this issue for quite a period of time, because of the auto industry, because of the health care industry and the health care technology industry that is headquartered in Nashville. And of course, predictive diagnostics, disease analysis, pharmaceutical research benefit tremendously from A.I. And then you look at the entertainment industry and voice cloning, and you look at what our entertainers, our songwriters, our artists, our authors, our publishers, our TV actors, our TV producers are facing with A.I. To them, it is, in effect, a way of robbing them of their ability to make a living off of their creative work. So our creative community has a different set of issues. Martina McBride, who is no stranger to country music, went onto Spotify, where the playlist is a big thing, building your own playlist. She was going to build her own country music playlist on Spotify. She had to refresh 13 times before a song by a female artist came up. 13 times. So you look at the power of A.I. to shape what people are hearing. In Nashville, we like to say you can go down to Lower Broadway, go to one of the honky-tonks, your band can have a great night, and you, too, could end up with a record deal. But if you've got these algorithmically generated A.I. playlists that cut out new artists or female artists or certain sounds, then you are limiting someone's potential. Just as, if you allow A.I.-generated content, like on Jukebox, which OpenAI is experimenting with, and you train it on an artist's sound and generate songs to imitate them, then you are robbing them of the ability to be compensated. So how do we ensure that the creative community is still going to have a way to make a living, without having A.I. become a way to steal their creative talents?

I think this is a very important issue, and I think it also applies to the authors, some of whom are suing OpenAI. I'm not really an expert on copyright at all, but some of my colleagues are, and I think they would be great witnesses for a future hearing. I think the view is that the law as it is written simply wasn't ready for this kind of thing to be possible. So if by accident the system produces a song that has the same melody, that is going to fall under existing law.
You are basically plagiarizing, and there have been cases of human plagiarism.

We have explored the fair use issue in this committee already, and will continue to do so. My time has expired. Thank you, Mister Chairman.

We'll begin a second round of questions. I want to begin with one of the points that Senator Blackburn was making about rights of action, which I think Senator Hawley and I have discussed incorporating into legislation in many instances. To be blunt there, agencies can become captive to the industries they are supposed to regulate. This one is too important to allow it to become captive, and one very good check on the captivity of federal entities, agencies, or offices is in fact private rights of action. So I would hope that you would endorse that idea. I recognize you are not lawyers, and you're not partisans of litigation, but I'm hoping that you would support that idea. I see nodding heads, for the record. Let me turn, also to recap, to the very important comments you all made about elections: taking action against deepfakes, watermarks, some kind of disclosure, without censorship. We don't want a ministry of truth. We want to preserve civil rights and liberties and free speech; those rights are fundamental to our democracy. But the kinds of manipulation that can take place in an election, including interfering with vote counts and misdirection of election officials about what is happening, present a very dangerous specter. Superhuman A.I., superhuman A.I.: I think all of you agree we are not decades away; we are perhaps just a couple of years away. And you describe it, all of you do, in terms of the biological effects, the development of viruses, pandemics, toxic chemicals. But superhuman A.I., for me, is the most frightening: artificial intelligence that on its own could develop a pandemic virus; on its own decide Joe Biden should not be our next president; on its own decide that the water supply of Washington, D.C. could be contaminated with some kind of chemical, and have the knowledge to do it through public utility systems. I think that argues for the urgency here, and these are not science fiction anymore; you describe them in your testimony. So I think your warning to us has really graphic content, and it ought to give us urgency to develop an entity that can not only establish standards and rules, but also pursue research on countermeasures that detect those misdirections, whether they are the result of malign actors, or mistakes by A.I., or malign operations of A.I. itself. Do you think those countermeasures are within our reach as human beings, and is that a function for an entity like this one to develop?

I think this is one of the core things, whether it is the bio risks from models, which I stated in testimony are likely to come in two to three years, or the autonomous models, which might take a little bit longer than that. The idea of being able to measure whether the risk is there is really a critical thing. If we can't measure, then we can put in place all of this regulatory apparatus, but it will all be a rubber stamp. Funding for the measurement apparatus and the enforcement apparatus, working in concert, is really going to be central here. Our suggestion was the national A.I. research cloud, which can allow a wider range of researchers to study these risks and develop countermeasures. That seems like a very, very important measure. I'm worried about our ability to do this in time, but we have to try. We have to put the effort in.

I completely agree.
About the timeline, as I wrote in my written testimony, my estimate is it can be a few years, but it could also be a couple of decades, because research is impossible to predict. But if we pull on the levers of regulation and liability, that will help a lot. My calculation is we can reduce the probability of a rogue A.I. showing up by a factor of 100 with regulations. It is really worth it. But it is not going to bring it to zero, especially with bad actors that don't follow the rules anyway. So we need that investment in countermeasures, and A.I. is going to help us with that. But we have to do it carefully, so that we don't create the problem we are trying to solve in the first place. Another aspect of this: it is not just A.I. It needs to bring together expertise in national security, in bioweapons, chemical weapons, and A.I. people altogether. The organization that is going to do that, in my opinion, shouldn't be for-profit. We shouldn't mix the objective of making money, which you know makes a lot of sense, with this objective. We should be single-mindedly defending humanity against the potential dangers of A.I. Also, we should be very careful to do this with our allies in the world, and not do it alone. First, we can have a diverse set of approaches, because we don't really know how to do this. We are hoping that as we move forward and we try to solve the problem, we'll find solutions. But we need a diversity of approaches, and we also need some kind of robustness against the possibility that one of the governments involved in this kind of research isn't democratic anymore, for some reason. That could happen. We don't want a country that was democratic and has power over a superhuman A.I. to be the only country working on it. We need a resilient system of partners, so that if one of them ends up being a bad actor, the others are still there. Thank you very much. Professor Russell, if you have a comment?

I completely agree that if there is a body, it should be enabled to fund and coordinate this type of research. I completely agree with the other witnesses that we haven't solved the problem yet. I think there are a number of approaches that are promising; I tend to favor the approaches that provide mathematical guarantees rather than just empirical guarantees. We've seen that in the nuclear era. Originally, the standard, I believe, was that you could have a major core accident once every 10,000 years, and you had to demonstrate that your system design met that requirement; then it was 1 million years; now it's 10 million years. So that is progress. And that actually comes from having a real scientific understanding of the materials, the designs, the redundancies, et cetera. We are just in the infant stages of our current understanding of the A.I. systems we are building. I would also say that no government agency is going to be able to match the resources that are going into the creation of these A.I. systems. The numbers I am seeing are roughly ten billion dollars a month going into AGI startups. Just for comparison, that is about ten times the budget of the entire National Science Foundation of the United States, which covers physics, chemistry, biology, et cetera. How do we get that directed towards safety? I actually believe that the involuntary recall provisions that I mentioned would have that effect. If a company puts out a system that violates one of the rules, and it is then recalled until the company can demonstrate it would never do that again, then the company could go out of business.
It then has a very strong incentive to actually understand how the system works, and if it can't, to redesign the system. That just seems like basic common sense to me. I also want to mention, on rogue A.I. and the bad actors: Professor Bengio has mentioned an approach based on A.I. systems that are developed to try and counteract rogue A.I. systems. But I also feel we may end up needing a very different kind of digital ecosystem in general. What do I mean by that? Right now, to a rough approximation, a computer runs a piece of binary code that you load into it. We put layers on top of that: okay, that looks like a virus, I'm not running that. We actually need to go the other way around. The system should not run any piece of binary code unless it can prove to itself that it is a safe piece of code to run. So it is sort of flipping the notion (a sketch of this inverted model follows this exchange). That way, we could actually have a chance of preventing the circumvention of these controls by bad actors, even ones with resources in the tens or hundreds of billions of dollars. That is an approach I would recommend. I have more questions, but I'm going to turn to Senator Hawley.

Let's talk a little bit about national security and A.I. I'm going to come back to you. You mentioned, in your written testimony and your policy recommendations, your first recommendation is that the U.S. must secure the A.I. supply chain. You mentioned immediately, as an example, the chips used for training A.I. systems. Where are most of the chips made now? Your microphone, maybe; people are eager to hear what you have to say. What I have in mind here is that there are certain bottlenecks in production, from semiconductors to chips to systems, which have to be stored in a server somewhere. In theory they could be stolen or used in an uncontrolled way. Compared to some software elements, those are where there are substantially more bottlenecks. Understood. We've heard a lot about chips, CPUs, the shortage of them. Do you know where most of them are currently manufactured? There are a number of steps in the production process for chips. You produce the wafers for the actual CPUs; those have been made in a number of places. For example, an important player on the base fabrication side would be TSMC, in Taiwan, and companies like that in the USA producing CPUs. I don't know exactly where each step of that process happens; it could be in a large number of places. As part of securing our supply chain in this area, should we consider limitations, if not outright prohibitions, on components that are manufactured in China? On that particular issue, that is not one where I have a huge amount of knowledge. I think we could think a little bit in the other direction: the things that are produced by our supply chains, do they end up in places we don't want them to be? We worry about that a lot in the context of models. You have spent a lot of dollars to train a system, and then some state actor or criminal or rogue steals it and uses it in an irresponsible way that you don't endorse. Let me get at this problem from a slightly different angle. Let's imagine a hypothetical in which the communist government in Beijing decides to launch an invasion of Taiwan. Let's imagine, and sadly it doesn't take much imagination, let us imagine they are successful in doing so. Give me a back-of-the-envelope forecast: what might that do to A.I. production? I'm not an economist; it is hard to forecast.
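As referenced above, here is a minimal sketch of the inverted trust model Professor Russell describes, where the default is to refuse any code that does not carry a verifiable certificate of safety, rather than to run everything that doesn't look like a virus. The registry and issuer names are hypothetical, and a hash-keyed allowlist is a simplified stand-in for a machine-checkable safety proof.

```python
import hashlib

TRUSTED_CERTIFICATES: dict[str, str] = {}  # digest -> issuer of safety proof

def certify(binary: bytes, issuer: str) -> None:
    """Record that `issuer` has proved this exact binary safe."""
    TRUSTED_CERTIFICATES[hashlib.sha256(binary).hexdigest()] = issuer

def run(binary: bytes) -> None:
    digest = hashlib.sha256(binary).hexdigest()
    issuer = TRUSTED_CERTIFICATES.get(digest)
    if issuer is None:
        # Default-deny: no proof, no execution. This is the flip from
        # today's default-allow-with-virus-scanning model.
        raise PermissionError("no safety certificate for this code")
    print(f"executing code certified safe by {issuer}")
    # ... hand off to the real loader here ...

payload = b"\x90\x90..."  # some binary of unknown provenance
try:
    run(payload)           # refused: nothing vouches for it
except PermissionError as e:
    print("blocked:", e)

certify(payload, issuer="audited-build-pipeline")
run(payload)               # now permitted
```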
But if a large fraction of the chips do indeed go through the supply chain in Taiwan, there is no doubt that is a hot spot and something we should be concerned about, for sure. Do either of the other panelists want to say anything about this? Professor Russell, perhaps? There are studies; my colleague Orville Schell, a China expert, has been working on these issues. There are already plans to diversify away from Taiwan. TSMC is trying to create a plant in the U.S. and also in Germany, but it is taking time. If the invasion that you mentioned happened tomorrow, we would be in a huge amount of trouble. As far as I understand it, there are plans to sabotage all of TSMC's operations in Taiwan if an invasion were to take place, so that none of that capacity would be taken over by China. What is sad about that scenario is that it would be the best-case scenario, right? If there is an invasion of Taiwan, maybe all of the capacity, or most of it, gets sabotaged, and we would all be in the dark for however long. The point I'm trying to make is that supply chains are absolutely critical. I'm thinking very seriously about the decoupling efforts. I think it is vital at every point of the supply chain we can. If we don't do that with China soon, and frankly we should have done it a long time ago, if we don't do it very, very quickly, I think we're in a lot of trouble. We've got to think seriously about what may happen in the event of a Taiwan invasion. I want to emphasize Professor Russell's point even more strongly. We are trying to move some of the chip production capabilities to the U.S., but we are talking about two to three years for some of these very scary applications, and maybe not much longer than that for truly autonomous A.I. Correct me if I'm wrong, but I think the timelines for moving these production facilities look more like five years, seven years, and we've only started on a small component of them. So just to emphasize, I think it's absolutely essential.

Let me ask you about a different issue, related to labor overseas and labor exploitation. The Wall Street Journal published a piece today entitled "Cleaning Up ChatGPT Takes Heavy Toll on Human Workers." Contractors in Kenya said they were traumatized by the effort to clean up violent and sexual abuse material. The article details widespread use of labor in Kenya to do this training work on the ChatGPT model. I would encourage everyone to read it, and I would ask the chairman to be able to enter this into the record. A couple of disturbing things. One, we are talking about 1,000 or more workers overseas, and the exploitation of those workers. They work around the clock. The material they are exposed to is incredible and, I'm sure, extremely damaging; there are lawsuits now in the works. Here's another interesting tidbit. The workers on the project were paid an average of between $1.46 an hour and $3.74 an hour. Let me say that again. The workers on the project were paid on average between $1.46 an hour and $3.74 an hour. OpenAI says, oh, we thought they were being paid over $12 an hour. So we have a classic corporate outsource maneuver, where a company outsources jobs, jobs that could have been in the U.S., exploits workers to do it, and then says, I don't know anything about it. We are asking them to engage in psychologically harmful activity, probably overworking them too, and not paying them. Oops. I guess my question is, how widespread is this in the A.I. industry?
Because it strikes me, we are told that A.I. is new and it's a whole new type of industry, and it's glittery and magical, yet it looks like it depends in critical respects on very old-fashioned, disgusting, immoral labor exploitation. This is one area where Anthropic has a different approach from what you've described; I can't speak to what other companies are doing. A couple of points. The constitutional A.I. I mentioned is using one A.I. to train another A.I. system (a sketch of that loop follows this exchange). It does not eliminate, but it potentially reduces, the need for the human labor you are describing. Second, in our own contracting practices, and I would have to talk to you directly about numbers, but I believe the companies we contract with employ something like upwards of 75 percent workers from the U.S. and Canada, all paid at about the California minimum wage. So I share your concerns about these issues, and we are committed both to developing research that obviates the need for some of this kind of moderation and to not exploiting workers. That's good. What would be terrible to see would be this new technology built by foreign workers, not American workers. That seems like the same old story we've heard for 40 years in this country. We are told American workers cost too much. American workers are too demanding. American workers don't have the skills. So we're going to outsource it. We are going to get foreign workers. They mysteriously turn out to be foreign workers. Then you don't pay the foreign workers. And then who benefits from it at the end of the day? These few companies we talked about earlier, who make all the profit and control all of it. That seems like an old, old story that I frankly don't want to see replicated again. That seems like a dystopia, not like a new future. So I think it is critical we find out what the labor practices of all of these companies are. I'm glad that you are charting a different course. We want to hold you to that. I think it is vital, as we continue to look at how this technology is developing, that we actually push for it: what is wrong with having a technology that actually employs people in the USA and pays them well? Why shouldn't American workers and American families, protected by our labor laws, benefit from this technology? I don't think that's too much to ask. I think we ought to expect that of companies in this country who, with access to our markets, work on this technology. Thank you.

I don't think you'll find much disagreement with that proposition. To have American workers do those jobs, you need to train them, correct? And you all, in some sense, because you are all teachers, professors, are engaged in that enterprise. Mr. Amodei, I don't know whether you can still be called a professor. Probably not; I never was a professor. But we need to train workers to do these jobs. And for those who want to pause, and some of the experts have written that we should pause A.I. development, I don't think it's going to happen. We, right now, have a gold rush, really much like the gold rush that we had in the Wild West, where in fact there were no rules and everybody was trying to get to the gold without many law enforcers out there preventing the kinds of crimes that can occur. So I am totally in agreement with Senator Hawley in focusing on keeping it made in America when we're talking about A.I. I think he is absolutely right; we need to provide that kind of structure and training and incentive to enable it and inform it. Let me come back to this issue of national security.
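As referenced above, here is a minimal sketch of the critique-and-revise loop that constitutional A.I. is publicly described as using, with a model improving its own outputs against written principles so that less human review of harmful material is needed. The generate function and the two principles shown are hypothetical placeholders, not Anthropic's actual constitution or training code.

```python
CONSTITUTION = [
    "Choose the response that is least likely to help someone cause harm.",
    "Choose the response that is most honest about uncertainty.",
]

def improve(generate, user_prompt: str) -> str:
    """Have the model critique and revise its own answer, one principle at a time."""
    answer = generate(user_prompt)
    for principle in CONSTITUTION:
        critique = generate(
            f"Principle: {principle}\n"
            f"Response: {answer}\n"
            "Does the response violate the principle? Explain briefly."
        )
        answer = generate(
            f"Principle: {principle}\n"
            f"Response: {answer}\n"
            f"Critique: {critique}\n"
            "Rewrite the response so it satisfies the principle."
        )
    # The (prompt, final answer) pairs become training data for the next
    # model: A.I. feedback substituting for much of the human moderation
    # labor discussed above.
    return answer
```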
Who are our competitors, among our adversaries and our allies? Is it China? Are there other adversaries out there that could be rogue nations, not just rogue actors but rogue nations? And who do we need to bring into an international body of cooperation? I think the closest competitor we have is probably the UK, in terms of making advancements, both in terms of academia and, in particular, DeepMind, based in London, now merged more forcefully into the larger Google organization. But they have a very distinct approach, and they've created an ecosystem in the UK that is really quite productive. I've spent a fair amount of time in China; I was there a month ago, talking to the major institutions that are working on AGI. And my sense is that the level of threat they currently present has been slightly overstated. They've mostly been building copycat systems, and they turn out not to be nearly as good as the systems that are coming out from Anthropic and OpenAI and Google. The intent is definitely there; they have publicly said that they're going to be the world leader, and they are investing probably larger sums of money than we are in the U.S., counting all the sums in the private sector. As for the areas where they are actually most effective: I was actually on a panel in Tianjin for the top 50 Chinese startups, they were giving out awards, and I think for about 40 of those 50, their primary customer was state security. So they are extremely good at voice recognition, face recognition, tracking and recognition of humans based on gait, and similar capabilities that are useful for state security. In other areas, like planning, they are just really not that close. They have a pretty good academic sector that they are in the process of ruining by forcing them to meet numerical publication targets, things like that. They don't give people the freedom to think hard about the most important problems, and they are not producing the basic advances that we have seen, both in the academic and the private sector, in the U.S. It is hard to produce a superhuman thinking machine if you don't allow humans to think. I have also looked a lot at European countries; I am working with the French government quite a bit. I don't think anywhere else is in the same league as those three. [inaudible] That would capture pretty much both the expertise and a very strong A.I. system that could be important here. And there will probably be some way for our entity, our national oversight body doing licensing and registration, to still cooperate. In fact, I would say one of the reasons to have it is to work and collaborate with other countries. There is no doubt individual countries have their own national security organizations and are going to make their own laws. But the more we can coordinate on this, the better. Some of that research should be classified and not shared with anyone. So there are aspects of what we have to do that have to be really broad, at an international level, and I think that guidelines, maybe mandatory rules for safety, should be something we do internationally, maybe with the U.N. We want every country to follow these rules. Because even if they don't have the technology, some rogue actors, even here in the U.S., might go and do it somewhere else. Then viruses, computer or biological viruses, spread everywhere, so we need to make sure there is an international effort in terms of some of these safety measures. We need to agree with China on these safety measures. And we need to work with our allies on these countermeasures.
I think all those observations are extremely timely and important. And on the issue of safety, I know that Anthropic has developed a model card for Claude that, essentially, involves evaluating capabilities, the risk of self-replication, or similar kinds of danger. OpenAI engaged in the same kind of testing. We've been talking about testing and auditing. So apparently you have shared the concern that systems may get out of control. Professor Russell recommended an option to terminate an A.I. system; Microsoft called this requirement safety brakes. When we talk about legislation, would you recommend that we impose that kind of requirement as a condition, alongside the testing and auditing evaluations that go on when deploying certain A.I.? Firstly, focusing on risk, I think everybody has talked about systems that are vulnerable to risks, systems and A.I. models spreading like a virus as in science fiction, but the safety brakes could be very, very important. Do you agree? Yes, I, for one, think that makes a lot of sense. The way I think about it, in the testing and auditing regime we all discussed, the best case is if all of these dangers that we're talking about don't happen because we've developed tests to detect the dangers. There is basically prior restraint: if these things are concerns for public safety and national security, we would never want the bad thing to happen in the first place. But precisely because we are still developing the science of measurement, probably it will happen, at least once, unfortunately perhaps repeatedly, where we run these tests, we think things are safe, and then they turn out not to be safe. So I agree we also need a mechanism for recalling things, modifying things. That seems like common sense to me, sure. I think there has been some talk about AutoGPT; maybe you can talk about how that relates. AutoGPT refers to certain systems, not designed to be chatbots, but commandeering such systems for taking actions on the internet. To be honest, such systems are not particularly effective at that yet, but they may be in the future. They relate to the kinds of things we are worried about in the future, the long-term risks among the short-, medium-, and long-term risks I described. So I don't as of yet see a particular amount of danger from things like the system you've described, but it tells us where we're going, and where we are going is quite concerning. In some of the areas that have been mentioned, like medicines and transportation, there are public reporting requirements. For example, when there's a failure, the FAA system has an active incident-reporting program. They collect data about failures. It serves as a warning to consumers, creates deterrence against putting unsafe products on the market, and advances the science of public safety (a sketch of such a report follows this exchange). We've discussed this afternoon both short-term and long-term types of risks that can cause very significant public harm. It doesn't seem like A.I. companies have an obligation to report issues right now. There is no place to report them, they have no obligation to make them known; if they discover the "oh my god, how did that happen" incidents, it can be entirely undisclosed. Would you favor some kind of requirement for that kind of reporting? Absolutely. And it may be obvious, but let me ask all of you. I see again your heads nodding, for the record. Would it inhibit creativity or innovation to have that kind of requirement? I don't think so. I mean, there are many areas where there are important trade-offs; I don't think this is one of them. I think the requirement makes sense.
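As referenced above, here is a minimal sketch of what a mandatory A.I. incident report could look like in practice, by analogy to the aviation reporting the witnesses describe. Every field name here is illustrative, and the registry it would be filed with is hypothetical; no such federal schema exists today.

```python
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone
import json

@dataclass
class AIIncidentReport:
    developer: str                 # who is filing
    model_id: str                  # which deployed system
    severity: str                  # e.g. "near-miss", "harm", "critical"
    description: str               # what the system did
    discovered_at: str
    mitigations: list[str] = field(default_factory=list)

def file_report(report: AIIncidentReport) -> str:
    """Serialize the report for submission to a (hypothetical) registry."""
    payload = json.dumps(asdict(report), indent=2)
    # In a real regime this would be transmitted to the oversight body
    # within a statutory deadline, the way aviation incidents are.
    return payload

print(file_report(AIIncidentReport(
    developer="ExampleAI Inc.",
    model_id="assistant-v4",
    severity="near-miss",
    description="Model produced a partial synthesis route for a toxin "
                "during red-team testing; output blocked by a filter.",
    discovered_at=datetime.now(timezone.utc).isoformat(),
    mitigations=["refusal classifier retrained", "eval added to suite"],
)))
```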
To give a little of our experience in biological harms: we've had to work on pilots of a responsible disclosure process. I think that's less about reporting to the public and more about making other companies aware, but the two things are similar to each other. A lot of this is being done on voluntary terms. You see some of it coming up in the commitments that these different companies make. I think there is a lot of legal and process infrastructure that is missing here and should be filled in. And I think, to go along with the notion of an involuntary recall, there has to be that reporting step. You mentioned recall. Both Senator Hawley and I were state attorneys general before we got this job, so both of us are familiar with consumer issues. One of the reservations for me always was, even with a recall, a lot of consumers didn't do anything about it. So I think the recall as a concept is a good one, but there has to be a cop on the beat, a cop on the A.I. beat. I think the enforcement powers here are tremendously important. And the point that you made about the tremendous amount of money is very important. Right now, it is all private funding, or mostly private funding. But the government has an obligation to invest, I think you would agree, just as it has in other technology and innovation, because you can't rely on private companies to police themselves. That is cops on the beat in the A.I. context: incentivizing innovation, funding it, to provide the airbags and seatbelts and crash-proof kinds of safety measures that we have in the automobile industry. I recognize that the analogy is imperfect, but I think the concept is there. Senator Hawley?

This has been tremendously helpful. I want to thank you again for taking the time to be here. I want to ask you, if you could give us your one, or at most two, recommendations for what you think Congress ought to do right now, what should we do right now, based on your expertise and on what we've talked about today? I would be very, very curious. Maybe we'll start with Professor Russell and go down the line. I gave some recommendations in my opening remarks. There is no doubt we're going to have to have an agency. If things go as expected, A.I. is going to end up being responsible for the majority of economic output in the United States. It can't be the case that there is no regulatory authority. The second thing: focus again on the rule that systems that violate or commit unacceptable behaviors should be removed from the market. That will protect the American people and our national security, while stimulating a great deal of research on ensuring A.I. systems are well understood, predictable, and controllable. Professor Bengio? What I suggest, in addition to what Professor Russell said, is to make sure, either through incentives to companies or also through direct investment in nonprofit organizations, that we invest heavily, as much as we spend on making A.I. more capable, in safety, whether it is hardware, cybersecurity, or national security, to protect the public. Very good. Mr. Amodei? I would stress the testing and auditing regime. Two or three years to the bio risks, the risks of autonomous replication some unspecified period after that. All of those can be attached to different kinds of tests that we can run on a model. That can be a scaffolding: even if we start by testing for only one thing, we can in the end test for a much wider range of concerns. And I think without such testing, we are blind. If I give you an A.I. system and you talk to it,
it is not easy to determine whether it is a safe system or a dangerous system. We are making these machines; like cars and airplanes, these are complex machines. We need an enforcement mechanism and people who are able to look at these machines and say, one, what are the benefits of this, and what is the danger of this particular machine, as well as machines in general? Once we can measure that, I feel it is all going to work out well. But before we have identified and put in place a process for this work, from a regulatory perspective, we are in the dark. The final thing I would emphasize: I don't think we have a lot of time. I personally am open to whatever administrative mechanisms put those kinds of tests in place. I'm relatively agnostic on whether it's a new agency or existing agencies. But whatever we do, it has to happen fast, and I think the focus needs to be on the bio risks, targeting 2025, 2026, maybe even 2024. If we don't have things in place to restrain what can be done with A.I. systems, we're going to have a really bad time. Thank you to each of you; that's really helpful. Let me throw an idea out to you while I have you here, so to speak. We think about protecting individuals and their personal data, and making sure it doesn't end up being used to train one of these generative A.I. systems without the individual's consent. We know there is an enormous amount of personal information out there in public, kind of; it is out there on the web, everything from credit histories to social media posts, et cetera, et cetera. Should we, in addition to existing property rights, explicitly give every American a property right in their data? Should we also be requiring permission if A.I. companies want to use individual data in their models in some way? It is not always going to be possible to attribute the outputs of a system to a particular piece of data. These systems are not just copying; they're integrating information from many, many sources. So we need other mechanisms to share value with the people whose data is being used. But in some cases it could be identified, if an output is close enough to something that has copyright, or something like that. I think in that case, yes, we should do it. Any other thoughts? That is all of my questions.

I have a couple more questions; I promise they will be brief. You have been very patient. You are such a great resource that I want to impose on your patience and your wisdom. The point that you were making earlier about red teaming and the importance of testing and auditing reminded me of your testimony, your prepared testimony, but also conversations you and I had about how Anthropic went about testing its large language model, particularly as related to the biological dangers, where you worked with "world-class biosecurity experts," I think was your quote, over many months, in order to be able to identify and mitigate the risks that Claude 2 might raise. On the other hand, I think you may have mentioned a company that basically used crowdsourced workers and did the same task. That's an enormous difference in those two testing regimes. Right now there is no requirement, no legal duty. But would you recommend that when we write legislation, we impose some kind of qualification on the testers and evaluators, that they have the expertise? Certainly, I am very aligned with that. All of us, all of the companies, all the researchers, are trying our best to figure this out. I don't want to call out any companies here; I think we're all trying to figure it out together.
I think it is a lesson that in testing these models, you can do something that you might think is a very reliable way of eliciting bad behavior from the models, or a test that you think is trustworthy, and you can find out later that that really wasn't the case. Even if you have all the good intent in the world, in the case of bio, you have to have world experts to zero in on a few things; in other areas, the key might be different. And so I think the most important thing might not be static requirements, although I certainly endorse that the level of expertise has to be very high, but making the process have some living element to it, so it can be adjusted: what we used to think, this test was okay, this test was not. Just imagine we're a few years after the invention of flight and we're looking at these big machines: how do we know if this thing is going to crash? Right now we know very little. Somehow we need to design the regulatory architecture so that we can get to the point where, if we learn new things about what makes planes safe and what makes planes crash, they get kind of automatically built into whatever architecture we build. I don't know the best way to do that, but I think that should be the goal. That's a very timely analogy, because a lot of the military aircraft we're building now basically fly on computers. The pilots are in the planes right now, but we're moving towards such sophisticated and complicated aircraft, which I know a little bit about because I'm on the Armed Services Committee, that they are a lot smarter than the pilots in some of the flying they can do. But at the same time, they are certainly red-teamed to avoid missteps and mistakes. The kinds of specifics you mentioned are where the rubber hits the road. These kinds of specifics are where legislation will be very important. President Biden has enlisted, or elicited, commitments to security, safety, and transparency, and he announced on Friday an important step forward. But this red teaming is an example of how voluntary, nonspecific commitments fall short; the advantages are in the details, not just the devils. The details are tremendously important. When it comes to economic pressures, companies can cut corners. Again, the gold rush. Decisions have real economic consequences.

I want to turn, in the last minute and a half, to the issue of open source. You each raised the security and safety risks of A.I. models that are open source or are leaked to the public. There are dangers; there are advantages as well. It's a complicated issue, and I appreciate that open source can be an extraordinary resource. But even in the short time we've had A.I. tools available, they have been abused. For example, I'm aware a group of people took an open model for the express purpose of creating nonconsensual sexual material. On the one hand, access to A.I. models and data is a good thing for research, but on the other hand, the same open models can create risks, because they are open. Senator Hawley and I, to give you an example of our cooperation, wrote to Meta about an A.I. model that it released to the public; you are familiar with it, I'm sure. They put the first version of it out there with not much consideration of risk, and it was leaked or was somehow made known. The second version had more documentation of its safety, but it seems like Meta's, Facebook's, business decisions may have been driving its agenda. So let me ask you about that phenomenon. I think you have commented on it, Dr. Bengio, so let me come to you first. I think it is really important.
When you put open source out there for something that could be dangerous, which is a tiny minority of what we call open source, essentially we are opening the door to bad actors. As the systems become more capable, bad actors don't need to have strong expertise anymore, whether it is in bioweapons or cyberattacks, to take advantage of things like this. They don't even need to have huge amounts of compute either to take advantage of these systems. Now, I believe that the different companies that have committed to these measures last week probably have different interpretations of what is a dangerous system. I think it is really important that the government comes up with some definition, which is going to keep moving, but makes sure that future releases are going to be very carefully evaluated for that potential before they are released. I've been a staunch advocate of open source my entire career. Open source is great for scientific progress. But as my colleague was saying, would you allow open-source nuclear bombs? And I think the comparison is apt. I've been reading the most recent biography of Robert Oppenheimer, and every time I think about A.I., the specter of quantum physics, nuclear bombs, and also atomic energy for peaceful and military purposes is inescapable. I have another thing to add on open source. Some of it is coming from companies like Meta, but there is also a lot of interest coming out of universities. Usually, universities don't have the means of training the models we are seeing in industry, but the code could then be used by a rich bad actor and turned into something dangerous. So I believe that we need ethics review boards in universities for A.I., just like we have for biology and medicine. Right now there is no such thing. There are ethics principles, and they could do it, but they are not set up for it; they don't have the expertise or the protocols. We need to move to a culture where universities across the world adopt these ethics reviews, with the same principles we are using for other kinds of dangerous research, but in the case of A.I. I strongly share Professor Bengio's view here. I want to make sure I'm precise in my views; there is nuance to it. I'm aligned with Professor Bengio: in most scientific fields, open source is a good thing and it can accelerate progress. I think even with A.I., there is room for open models at the small and medium sizes. I don't think anyone believes those models are super dangerous. They have some risk, but the benefits outweigh the costs. And I think, to be fair, even up to the level of the open-source models that have been released so far, the risks are relatively limited; I'm not sure I have an objection. But I'm very concerned about where things are going. If we talk about two to three years for the frontier models to pose the bio risks, and probably less than that for things like misinformation, we are there now. The path things are going in terms of the scaling of open-source models is, I think, a very dangerous path, and, again, that is if the path continues. I think it's worth saying some things about open-source models that I think are understood by this committee. When you control a model you are deploying, you have the ability to monitor its usage. You can revoke a user's access, change what the model is going to do. When a model is uncontrolled, there is no ability to do that; it is entirely out of your hands. So I think that should be attended to carefully. There might be ways to make it harder to circumvent the guardrails.
But that is a much harder problem, and we should confront the advocates with that problem and challenge them to solve it. Open source is a bit of a misnomer. Often it refers to small developers who are iterating quickly; I think that is a good thing. But I think here we are talking about something a little bit different, which is the uncontrolled release of larger models by, to your point, much larger entities with tens or hundreds of millions of dollars to train them. I think we should put that in a little bit of a different category. I'd like to add a couple of points. I agree with everything the other witnesses have said. One issue is being able to trace provenance: from the output that is problematic, through to which model was used to create it, through to where that model came from (a sketch follows at the end of this passage). The second point is about liability. It's not clear where the liability should lie, but to continue the nuclear analogy: if a corporation decided it wanted to sell a load of enriched uranium in supermarkets, and someone decided to buy several pounds of it and make a bomb, should some liability reside with the company that sold the enriched uranium? You could put a sign on it saying do not use more than three ounces of this in one place, but nobody is going to say that saves them from liability. So I think those two points are really important, and the open-source community has got to start thinking about whether they should be liable for putting dangerous stuff out there that is ripe for misuse. I want to invite any of you who have closing comments or thoughts that you haven't had an opportunity to express. I would like to add a point about international or multinational collaboration on these things, and how it is related to having maybe a single agency in the United States. If there are ten different agencies trying to regulate A.I. in its various forms, that could be useful. But as to what Professor Russell was saying, this is going to be very big in terms of the space it takes in the economy, and we also need to have a single voice that coordinates with the other countries. Having one agency that does that is going to be very important. Also, we need an agency in the first place because we can't put into law every protection that is needed, every regulation that is needed. We don't know yet what the regulation is going to be in one year, two years. We need something that's going to be agile. I know it's difficult for governments to do that; maybe we can do research to improve on that front, agility in doing the right thing. But having an agency is a tool toward that goal. I would close by saying that is exactly why we are here today: to develop an entity or a body that will be agile, nimble, and fast, because we have no time to waste. I don't know who the Prometheus is of A.I., but I know we have a lot of work to make sure that the fire here is used productively. There are enormously productive uses we haven't really talked much about, whether it is curing cancer, treating diseases, some of them mundane, like reading X-rays, or developing new technology that can help stop climate change. There is a vast variety of potentially productive uses, and it should be done with American workers, I think. Very much in agreement here. And the last point I would make, on agreement: what you have seen here is not all that common, which is bipartisan unanimity that we need guidance from the federal government. We can't depend on private industry. We can't depend on academia.
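As referenced above, here is a minimal sketch of the provenance idea Professor Russell raises, in which a controlled model signs what it emits so that problematic content can be traced back to the model that produced it. The key registry and model names are hypothetical, and an HMAC tag attached alongside the text is a simplified stand-in for real watermarking schemes that embed the mark in the content itself.

```python
import hashlib
import hmac

MODEL_KEYS = {"labA/model-v2": b"key-escrowed-with-regulator"}  # illustrative

def sign_output(model_id: str, text: str) -> str:
    """Attach a provenance tag identifying which model generated `text`."""
    tag = hmac.new(MODEL_KEYS[model_id], text.encode(), hashlib.sha256)
    return f"{model_id}:{tag.hexdigest()}"

def trace(text: str, provenance_tag: str):
    """Given content and its tag, identify the originating model, or None."""
    model_id, digest = provenance_tag.rsplit(":", 1)
    key = MODEL_KEYS.get(model_id)
    if key is None:
        return None  # unknown model: provenance cannot be established
    expected = hmac.new(key, text.encode(), hashlib.sha256).hexdigest()
    return model_id if hmac.compare_digest(expected, digest) else None

out = "a generated paragraph that later proves problematic ..."
tag = sign_output("labA/model-v2", out)
print(trace(out, tag))  # -> "labA/model-v2"
```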
The federal government has a role that is not only reactive and regulatory; it is also proactive in investing in the research and development of the tools needed to make this fire work for all of us. So I want to thank every one of you for being here today. We look forward to continuing this conversation with you. Our record is going to remain open for two weeks in case any of my colleagues have written questions for you; I may have some as well. If you have additional thoughts, feel free to submit them. I have read a number of your writings, and I am sure I will continue reading them, and I look forward to talking again. With that, this hearing is adjourned.
