The real risks and opportunities related to AI. I frame it in those terms because for the last six months we have had a very exciting AI debate, dominated by extreme voices around extreme fears or frankly unrealistic utopian visions about what AI can do for us, and there is a really important middle in between. Those are the nuts and bolts I would like us all to dive into. If there is anyone who feels they have not been keeping up to speed in the debate, do not worry; this technology is transformative and it will be here forever. You have not missed much. Number two, everyone here firmly believes that because this will transform all of our lives, we all arrive with a stake and a voice in this discussion. I am going to dive in with you. You have said that the most important first step in dealing with AI is understanding and managing the risks. Not everyone agrees. Sam Altman was not sitting there consulting the White House's blueprint for AI when he released his ChatGPT product into the world. How would the White House like us to share in this revolution? Thank you, Ryan. Amazing to be here with you. This topic of AI is an active, urgent area of work at the White House. I appreciate the chance to kick off this discussion. President Biden has been very clear, and many of you must have heard him talk about how we are at an inflection point in history. He very much talked about AI in that context as one of the most powerful forces today. The choices we make today, including about AI, are going to change the arc of the next few decades. That is why AI is such a high priority in the work we are doing, and our work starts by recognizing the phenomenal breadth of this technology as the most powerful technological force of our time. We all know what human history tells us about what happens with powerful technologies.
We know that humans will use them for good and for ill, so the approach we have taken from the White House on artificial intelligence is that we absolutely want to seize its benefits, but the way to do that is to start by managing its risks. Because AI is so broad, its applications are vast, so I will briefly give you four categories of risk that we think about. We need to untangle this; I am sure you have heard the cacophonous talk about AI. The second is the broad category of risks to safety and security, everything from self-driving cars to cybersecurity and biosecurity concerns. The third is the risk to civil liberties, including bias that can be embedded in algorithms. And fourth, risks to jobs and our economy. That starts to give you a sense of how incredibly broad this challenge is with AI. What you will see from us is ongoing work. The week I arrived to join the White House in October of last year, we released the AI Bill of Rights. I think when you are in choppy waters, with AI moving as rapidly as it is, there is no more important time to be clear about your values. That is the important foundation. You will continue to see many actions. Today we are working closely with the leading AI companies, helping them step up to their responsibility. We are working across agencies and government on everything we can do through executive action to get AI on a good track. We will definitely continue to work with Congress on a bipartisan basis as they start laying out the legislative agenda. And then finally, we are working with our international allies and partners. You will see all of those lines of effort. I want to step back and say we know we are in a time when every nation in the world is trying to use AI to shape a future that reflects its own core values.
We can all disagree about many things, but one thing I know we agree on is that none of us wants to live in a future driven by technology shaped by authoritarian regimes. That is why, at this moment in time, American leadership in the world depends on American leadership in AI. I think that is what we will keep our eyes on as we do our work. Speaking of values, one American value is opportunity. On the other hand, Google is trying not to rush products out the door. How do you walk that tightrope? How do you be a partner to the White House while making sure you are innovating and not missing out on those opportunities? We talk about the notion of innovating boldly and responsibly, and doing that together, in a way that is inclusive and brings in a lot of different views. That is challenging, while also minimizing the likelihood that the technology is misused. For us, that breaks down into three categories, many of them paralleling what was just described. This will accelerate areas like quantum, but also things that make a difference in people's lives: precision agriculture and many more. Many people in computer science have never seen anything like this in their careers. But that has to be balanced with a responsibility agenda, and many of the comments that were made before go exactly to this. Making sure that we get fairness right; we have had a fairness program at Google since 2014. How are we thinking about the ways AI will change the future of work? Are we making sure we are staying grounded and factual when information can be challenging? You have heard about so-called machine hallucinations. That is a big research agenda we are working on comprehensively: goal alignment, safety, many other areas. But also security. We have to think about the challenges AI poses for cybersecurity, but also the potential advances in cybersecurity. This draws on zero-trust computing but also adds to threat intelligence.
And now the notion of red-teaming and adversarial review that we have started to work on throughout the industry. How do we make sure we are fine-tuning these models in a way that minimizes the harms and maximizes the benefits? If I can just do a follow-up with you there: not all AI models are created equal. There are different levels of risk and use cases. People have been afraid about letting some of these things out into the wild. I don't want to get into too technical a debate, but when you release a model and anybody can use it without any restrictions, it can get used in a lot of different ways. How worried are you about some of those? I think you are onto an incredibly important point. A few months ago we would have said that progress in AI is purely dependent on more compute, and because of that, proliferation. When I was a venture capitalist I would have said it is democratizing the technology; when I was in the Defense Department I would have said it is proliferating, and both of those things are true. We want AI to be safe and effective before we release it, whether it is a proprietary model or one anyone can use. I think we should be clear that we actually don't have the tools to know when something is safe and effective. By definition, then, it is not safe and effective; we can't know. That is the work we have to do. All the questions go to you, because you have the impossible job of figuring this out. For those of you who are not following this: these models cost $100 million or more to train. That is a huge amount of compute. Those you might be able to regulate, because they are attached to big companies we know. But there is a proliferation of smaller models, some open source, some not. Figuring out how to ever create the standards, is that fair? I think that is the dynamic landscape we are in. The notion of case-by-case use cases will be critical. It is very hard to have a general-purpose rule. But we do have decades, centuries of experience with transportation, etc.
As we start to fine-tune those specific use cases, we can develop benchmarks for evaluation, draw on regulatory expertise, and get to a better outcome that is more fine-grained and more nuanced. Why do so many people seem to be so afraid of AI, and how do we go about creating the building blocks of trust? It is not going away. We have to find some way to have AI we can trust. People say there are threats out there; you can see Arnold Schwarzenegger coming down and wreaking havoc. When we look at it, the problem is that AI has been around a long time, and no one is putting this new technology in context. Every time you say "Hey Google" or "Alexa," you are using an AI mechanism. You are using a digital assistant that will help you get information, set your alarm, do whatever you're going to do. We have been told this is some new, emerging, scary Jurassic Park type of technology, as opposed to an iteration of what we have been doing in the past. Yes, there are new threats, but it is not as scary as people think. I think the media has driven this narrative of saying "boo" every time you say AI. Be very afraid, it is going to get you, it is going to take away your job. And we know over time, when you look at technology and innovation, we actually create jobs with innovation. There are different jobs, and there is a need for transformative policy that deals with that. That is what is happening. When you look at the knowledge people have, they don't know what this is. And when you don't know what something is, and you are told it is going to take your job, and you are told it will make your baby have two heads, you worry, because you don't have a context to judge the honesty. I just want to say something about my concern. I will tell you, I talked with a U.S. senator recently.
Their staff put together a deepfake of him saying something he would never say. Does that energize politicians to get involved in the AI debate? You'd better believe it. So now they are in this process of trying to understand how that can happen, how you protect against it, and what you do about it. I think there are some really smart, directional things being done in the Senate, being done in the House, being done by the industry. I think we are moving in the right direction, but we have to ratchet down the rhetoric about being afraid and amp up the rhetoric about how exciting this technology is for human civilization. What do you think of Senator Schumer's plan? He announced on Monday that he wants to do a series of AI insight forums to get senators up to speed so they are more informed before they regulate. Is that a model that can work in broader society? The point is, this is an extremely complicated, transformational part of our lives. Do we need AI insight town halls all across the country? I think there is a range of ability to understand this, and a range of current understanding of the technology. One of the really good pieces of news is that it is bipartisan: the concern that we need to do something, but also the hesitation to step back and say, "I don't know everything I need to know to make the right decisions." They are stepping back. They are going to analyze this. Typically what you would see in Washington is a big food fight over which committee has jurisdiction, whether it is Commerce, Judiciary, or Treasury. I think what Chuck is trying to do is get out ahead of the jurisdictional fights and say this belongs to all of us. Let's elevate our understanding so we can be rational and have a reasonable debate about the level of regulation.
I think you all feel comforted by the bipartisanship, but also by the measured approach Congress is taking to advancing some kind of framework in which to regulate and assess AI. Two things that were bubbling up there were jurisdiction and know-how. Are we going to end up needing a particular AI agency? Or are we trying to build up know-how across every agency in order to be able to deal with this? This is to recognize how strikingly broad the applications of this technology are. To me a single agency is not a workable model, certainly in the work we are doing in the executive branch. There is not just one action that will get this right. You really have to understand it as a mosaic and look at all of it. I see that very much reflected in the forums on the Hill. He has run two of them so far. One was a general briefing for senators to learn about the technology, and then I was able to participate as one of five people who spoke on national security. This was about a week and a half ago. We ended up covering a lot of territory, not just national security. The thing I really want to say is that it was very bipartisan; we had people from both sides of the aisle, and it was the second time I had been with a large group of senators talking about AI. I have to share that the quality of the questions being asked is on an upward slope. I think that learning process is underway. While we do what we are going to do from the executive branch, we very much want to maintain that good partnership and get to some good bipartisan solutions. That is virtually unheard of; there was not a lot of partisanship going on. I'm going to assume you would consider yourself a partner with the U.S. government in exploring AI territory. But not all partnerships are perfect, and there has been a rough ride for big tech the last few years in Washington.
Is there something you could nominate that you wish Congress or the executive branch were doing in this field to make it a more productive partnership? I appreciate that they are taking the time to get up to speed. There are broad areas here, not just in the United States but internationally: trying to provide more transparency, figuring out what benchmarks make sense in specific areas, having a risk-based approach. You look at very high-risk sorts of applications, but all of you have been using AI for many years if you use Maps or Gmail, and I think most people would say those are relatively low risk. This comes out of getting people in a room debating how the trade-offs work. How do we draw on important principles: privacy, nondiscrimination, openness, security? How do we get that right? That requires getting experts in the room. We have seen a range of CEOs express their willingness to embrace regulation. I have not seen them invite your team to look under the hood of their models. Is that something you would like, for someone to say, "We will come and check out how this stuff is coded"? I want to step back from that question. But you will have to answer it. [laughter] When we use the phrases "regulate" and "AI," we use them together, but we do not have a model of what we are talking about. With very broad applications, a lot of the harms we are concerned about are already illegal: when you use AI to commit fraud, or to accelerate your ability to commit cybercrimes, those are already things that are not OK. There is an important issue, which is that while the laws exist, our ability to regulate and enforce changes as AI changes how people do these things. That is the issue. A very important step in that direction: the Consumer Financial Protection Bureau, the FTC, and the Department of Justice put out a statement reminding people that these things are still illegal, and if you are using AI to do them, they will be enforcing against it. That is a great example of a step that is essential.
We will need to do work keeping up with those concerns, with that kind of accelerated malfeasance, and being able to spot new forms of problems when we see them. There is also a scale issue we are not ready for. Those are things we are working on right now, important actions that can start getting put in place. The question of what you do about the core technology itself is what people want to talk about, and that is not yet clear. Again, I want to keep coming back; Heidi, I love your point about Terminator and Jurassic Park. We are living in a time in which there are a lot of science-fiction conversations about AI, a lot of philosophy conversations. I sometimes feel like I am in a freshman dorm room at midnight. There are marketing conversations. All of those should go on. If we are going to make sensible policies that change outcomes, they are going to stay anchored in what human beings and corporations do: what is in the human data these systems are being trained on, how humans and corporations decide to use this technology, what impact they have in the real world. If you stay anchored in that and start working through how we mitigate these risks, you get to practical solutions. That has to be the benchmark against which we weigh any regulatory action. The benchmark will always be: did we reduce biosecurity risks? Did we reduce the risk of misinformation? If I could jump in. In essence, as we externalize these models, we will be doing external red-teaming: really sophisticated folks trying to break the systems. That is a collaborative learning exercise. How do we collectively learn from it? What kinds of attacks work well? This has to be a layered system of governance. You have to have companies taking responsibility, security by default and by design, and cross-industry groups to establish the standards. Those can be faster and more nimble than what governments can come up with.
In some cases they can provide a starting point. You are going to need forms of government regulation, and you will probably need international frameworks to deal with security risks; that work has already started. It is all of the above. We have been talking about government regulation, but there is a rule-of-law piece of this we have not been talking about. Look at Section 230 of the Communications Decency Act, which says we will treat these platforms, the systems we have created, like bulletin boards: we will not sue the bulletin board for what is on the bulletin board. There has been a mammoth ability for these platforms to grow free of any kind of civil liability. That is not true, in my opinion, of generative AI. A product is being created. You already see litigation around this: violation of copyright, using my image inappropriately, using my data. One of the reasons you want to look at regulation is that it can be a sword, but it is also a shield. We balance all of these interests and give certain levels of protection. When people talk about regulation, remember it is not always a sword; it can be a shield from the other enforcement mechanism, which is called civil litigation. I just want to double down on that. It is an important point. This is in its infancy. We were chatting, and I don't think any of us had a clear view. Should there be an FDA-like institution? Probably not. Should every department within the U.S. government be working on AI? Should there be legislation, law applied? This is so new that we have analogies for this stuff from different technologies, but the analogies are imperfect and we don't know how to apply them. I want to build a bridge between that point and how we internationalize or create global frameworks. Does the U.S. have an advantage because it has not rushed to regulate? Perhaps by accident, we are in a good position to be flexible. In terms of internationalizing, you had the Secretary-General of the U.
N. saying it has to be the U.N. that is the forum for a global body. You have the U.K. government organizing an AI safety summit in October. I could give you many more examples. How do we take advantage of the flexibility that the U.S. has, and amplify that to a global level? I would jump in quickly and say the goal should be the best AI regulations, not the first. We have a little bit of time to get it right, and we have a lot of great achievements to build on. It is a triumph of the ingenuity that our system has created. But at the same time we have to balance that. If we take six months or a year to figure out what combination of executive order, legislation, self-regulation, and internationalization makes sense, that is probably a good down payment on the future. I will add, this will not be one and done. You will see waves of action, and that is exactly what needs to happen. As a particular area gets big enough that we can do good regulation, when things ripen, that is when they will happen. I think there are urgent, immediate actions that can be taken. For example, Congress has considered privacy legislation and has gotten close on protecting our kids. These are harms that are happening in the world today. The president has been clear: these are things we need to deal with now, before this next wave of AI. That would be a fantastic step and something we continue to work on. You sat in those discussions, because privacy debates have been going on for years. How likely is it? I think there needs to be a sense of: why are we doing this? Why are we taking these steps to control privacy? Has there been an abuse of people's data? Have we inappropriately used it? One of the reasons you have not seen privacy legislation is that I do not think the voters and the public are demanding it. If someone went door to door in Pennsylvania with a list, how many people do you think would say AI is their biggest concern?
They will talk about gun violence, education, student loans, all of the things that affect their lives. It is the same when you look at privacy; go talk to people on the street. I would challenge you all. I say this because I did a huge privacy initiative as it related to bank policy. If you ask them, they say: I lost that years ago, I don't care, I am not doing anything wrong. This is not a voting issue. It is not what is going to motivate voters. Jobs are going to motivate voters. That is why you see a lot of attention to job displacement now: the Democratic Party feels like it missed the free-trade argument that transported a lot of jobs, and it will not make that mistake again. Let's not be the party that dislocates so many people that we get the blame. A lot of this is being driven, especially in an election cycle, by what voters are talking about, and they are talking about job insecurity. This is a great example of how it is so much easier to deal with a threat staring you in the face, fictional or real. But the privacy erosion that has happened is so counter to the fundamental liberties in our country. It has crept up on us, and people have traded privacy for a lot of conveniences. We are at a point where it is driving addictive behavior online; it is linked to the polarization we are seeing and the mental health issues we are seeing. But I think we struggle to deal with it. I would challenge you on whether that is an access problem or a privacy problem. Is that a privacy problem? Your kids are crying, you hand them a tablet and say, entertain yourself, and you do not look over their shoulder. Going back to the example of tobacco: when you look at the tobacco settlement, we know how vulnerable our social platforms are today to that kind of challenge, the civil litigation challenge. But I do not see those as privacy issues nearly as much as I see them as utilization and access issues. I want to jump in here. I used to pay $20 a month for AOL. That was back in the 1990s. I loved it. I would pay it today.
I get all of this stuff for free, and I am really happy. If I get an ad for a blouse, I am OK with that. That might be a privacy violation, but I am OK. [laughter] If I get to use email for free, that is how the public looks at it: I will tolerate some of that invasion because I get goodies out of it, and I am not paying for what they give me. It is not just about five-year-olds getting access to tablets. Teenagers are in a mental health crisis that is partly fueled by this addictive behavior. We have to see how these things are linked. We can agree to disagree on that. The effects of a lack of privacy differ according to where it comes from. There is not a lot of privacy in how AI is rolled out and used in China, where it gets used for nefarious purposes. You have talked to a lot of Chinese entrepreneurs, and we heard the Chinese ambassador say that AI is an area for potential positive cooperation. How do you see the development of AI in China? I find the conversation around national security and AI in Washington is animated by what China is doing. Here is what I see on the ground in China. An independent agency that measures this stuff says four out of 10 companies in the world working on AI are Chinese. A lot of the cutting-edge research papers on AI are coming out of China. We clearly lead the world in foundation models, the large language models we are hearing about, but they are getting better. We were joking last night that a large language model is pretty good, and every month it is getting better. When I go to Washington, there is a sense that we can export-control our way out of this problem. I do not think that will work. We put harsh restrictions on semiconductors in place in October, which was right: some on chips, some on equipment. We doubled down on restrictions on Nvidia chips. We have to do that, but let's not have illusions that it will slow things down. We have to find a way to move faster. When I talk to Chinese entrepreneurs, they are energized.
The narrative is that the Chinese have shut down their own entrepreneurs and now they cannot do anything, and that the Chinese are afraid of AI because they have already regulated it. I do not see that. I see much more that the Chinese government has created clear lanes for entrepreneurs: what is OK and what is not OK. Surveillance-related AI is OK, visual recognition, object recognition, and they are excellent at those things. This notion of global competition is a real one, and national security is underpinned by economic security and productivity. We think these will be incredible tools for leadership. The countries that do this right will implement AI in ways that are trusted by the populace and make their workers more productive. In globalization, we were losing American jobs. This is an opportunity to change that. If we approach it with that lens, there is a lot of good investment we need to make so we land in a place everybody can get behind. It is time to bring the audience into this discussion. I want to start with a question to all of you. Who among you thinks we collectively as a society have made AI understandable and representative of all of us? Who thinks we have done that? I see no hands. We have work to do. There is a gentleman at the back. Can you hear this? I have eight grandchildren. I went to a disturbing lecture at the Aspen Institute; I am sure some of you were there. They explained how AI and young children, children in particular, at 2 a.m., when there is nothing you can do, are getting AI friends. They explained that they do not want the companies to release GPT-5. At this point, since they have voice recognition, I can get a call that sounds like one of my eight grandchildren when it is not the child speaking. There is too much that is not regulated that is important. I understand everything you are saying, everything about where this is going.
However, if it takes so much away from the generations, and is doing things that are really very harmful, there is no control. That is why they are petitioning not to do GPT-5. You will not be able to control that if you are not ahead of the things you are doing. I am extremely concerned about this, as you can tell from my speaking. So, a more proactive approach to regulation? When you release the next thing, it will be so far ahead that, yes, young children will use it. They will do everything with it. We will not know, because I am not there at 2:00 a.m. when my grandchildren are doing this. This has to be addressed completely. It is a problem. I bet you are not alone. Any reactions on the stage? There is a whole section on deepfakes tomorrow. This is exactly right. I think part of the answer to the problem is in the technology: how do we build in safeguards and guardrails to minimize abuses? There are challenges about the proliferation of the tools, and how do we strike the balance between open-sourcing and keeping security? We need to make sure we get it right. I have heard the rosy descriptions of what is possible, and I hear what you are saying. This is one of the fundamental quandaries: any time you have a powerful technology, you have to be able to keep the bright and the dark in your head and work on both of them, mitigating one and achieving the other. That is what we are struggling with. You will never regulate your way out of this problem alone. You cannot count on the government by itself to protect your children. That is not going to work; it has never worked before. You cannot count on the government to take every drug off the street, to make sure there are no risks out there for your kids. I get what you are saying, and I know this keeps people up at night. But we have to have a partnership between families and this technology, and the technology could in fact lead to a cure for childhood cancer.
So we have to socially balance these things. I would go back to: we have to have responsible usage of the products that are being created, and one of the ways we have done this in the past is through civil litigation. Let's say you have a small startup; it is not going to be Google. Someone who has a great product that they will want to deploy. I could go to historical examples. How do we control them? Where are they getting their money? What is the risk for the people who are giving them money? How do we create monetary risk for people who create dangerous products? That is not going to be done by regulation alone. It will be done in courts of law that understand this. The gentleman in the yellow. I would love to know what lessons we can draw from Section 230 as it applies to today's reality. When I look back, we wanted to encourage innovation, but neither self-regulation nor government regulation has prevented massive disinformation and abuse. There are people who have hundreds of millions of followers who say things that, if you were in traditional media, you would be held accountable for. Google can publish stuff, Twitter can empower followers, and no one can do anything. There is a huge conflict of interest: Google monetizes false advertising. A couple of days ago somebody used my likeness to sell gummies to lose weight. They need to prevent false advertising. People are misusing the likenesses of others and making money off of them. I want him to have an opportunity to respond. Thank you for the question. We would argue the internet has been positive for the American economy, culture, and people around the world. Part of that involves challenging misuses and abuses. The industry has gotten better. Eight years ago, one view in 100 violated policies. Now, it is one in 1,000.
There are a lot of good learnings from that experience. It is odd that the advances in AI, which are scientific, we tend to think of through the lens of social media. We could think of them through the lens of scientific advances. We recognize the challenges, we have to stay on top of them, and we are hoping AI will turbocharge those efforts. Do not lose sight of all of the benefits. Access to information: one billion people have come out of extreme poverty in the world in part because of the proliferation of technology and access to information. It is a hard balance to strike, but if we do it together, we think we can get it right. Having led the only amendment to 230, I can say it is a challenge, partly because the courts have misinterpreted it. It allows illegal things to happen and it shields them. I would argue part of this is court misinterpretation. Can I end on a positive note? We have heard a lot of doom and gloom, and there are real worries here. But there are also amazing positive things coming out of this technology. I want to give a shout-out to someone who has been working on this; he and many others have been pushing a National AI Research Resource. Google, Meta, OpenAI, the companies are racing ahead, but academics cannot keep up; they do not have the compute and resources that we can use for noncommercial things. I know a lot of people have been pushing for this. This is one of the many ways we can harness AI for good. I will finish where I started. American leadership in AI is essential to the way the future will unfold. We are lucky to have innovators in this country driving this technology. It is the choices we make that will navigate all of these issues. I hope everyone will stay engaged. You have been an extremely engaged audience, and I think some of us will be interested to take questions when we come off the stage. We will cut it off there. I'm sorry I did not get to everybody.
