The Information Technology and Innovation Foundation has been ranked for the past several years as a top technology think tank. We're pleased to be here today to partner on this great workshop, and on this event in particular, this panel. I'm joined by Lynn Parker, Assistant Director for Artificial Intelligence at the White House Office of Science and Technology Policy. Next to her is Jason, General Manager for the Corporate Standards Group at Microsoft. Finally, Anthony Robbins, Vice President for North American Public Sector at NVIDIA. So we have a great group of public and private sector experts who can talk about activities in the standards development space, both from the industry using the standards and from the people helping define U.S. leadership in AI standards. Before we launch into the discussion, which will run about 45 minutes with an opportunity for audience Q&A at the end, I want to do a little bit of stage setting. The term AI standards is often used to describe two different but related topics. There are technical standards, things like reliability, performance, and accuracy, and then there is the oversight of AI systems. These are very different but related things: standards are a prerequisite for oversight. The oversight side of this conversation receives a disproportionately large share of attention. The potential for black-box systems running amok without our knowing what's going on has dominated conversations, and public policymakers understandably want to address those concerns. Unfortunately, this prioritization of oversight is coming at the expense of focus on standards development, the activities required to develop the robust scientific understanding that can serve as the underpinning for oversight. When we say we want oversight for transparency, which is what some people are calling for, right now algorithmic transparency doesn't have a definition. We don't know what it means, or how to compare the transparency of one system to another.
Rushing to make rules without doing the scientific legwork behind them is going to be shortsighted, and any rules will necessarily be arbitrary. I guess the challenge for us is: how do we get nontechnical policymakers to care about this important work? It's a challenge; I'm sure they know that. What we hope to get out of this conversation today, going forward, is to educate policymakers about the importance of the scientific legwork shaping the future of AI oversight. With all the concern there is in the public about the potential misuse of these systems, we need to make sure that concern translates into momentum for this kind of scientific investment. So to start off, I'm going to tee up an easy question for the panelists to get the ball rolling, going down the line: what is your primary focus when it comes to AI standards development? Are you working on standards development yourself, or how do you engage with standards in the community, and what do they mean to your business or your federal role in government? Good morning, and thank you all for being here; this is an important activity. My role at the White House is Assistant Director for AI, which means I oversee White House activities that promote AI. One of the important areas I'm working on now is all the deliverables called for in the executive order, and you heard this morning about the main deliverable we're discussing today, which is the creation of a plan for how the federal government should prioritize its engagement in technical standards development for AI. Since this is one of the key actions in the executive order, it's recognized by the administration as an area in which we as a nation need to get engaged. There are a lot of good reasons for doing that, and we can go into them as we go through the panel. But my role right now is cheering on the great work being done leading the way here. Certainly the RFI is an important way for stakeholders to provide feedback, and we encourage you and your colleagues to provide that feedback.
This workshop is an important opportunity to hear from everyone about what the federal government should be emphasizing. As the plan is developed and issued for public comment, we encourage you to respond and provide feedback. I'm cheering on all the great work that they are doing and all that you're doing to contribute to it. Good morning. It's a pleasure to be here as well. Thank you for hosting and getting this important process under way. At Microsoft I run a team of global practitioners who are involved in standardization at the international level, primarily in relation to artificial intelligence through JTC 1, and we'll be talking more about that throughout the day. I'll point out that as an organization, Microsoft is looking at artificial intelligence and the role standardization is going to play across a broader spectrum, and we recognize that standards are one element of the ways the technical community will approach interoperability, alongside the methods and practices emerging where people are working on exchanges. In the policy environment, I absolutely agree with and support the need for thinking about accountability. When you go down the path of accountability principles, as many countries around the world are doing, we recognize that the relationship between the regulatory approach and accountability comes down to the criteria by which you measure, and that criteria is going to be fundamentally predicated on the standardization work that gets done. I look forward to the panel, and thank you again for joining us. Good morning, my name is Anthony Robbins and I work for NVIDIA. I've spent my career at the intersection of government and commercial industry, most of it with Silicon Valley companies. In my role at NVIDIA, I'm the person in the field trying to predict and guide as it relates to AI, doing the right work to interpret needs for NVIDIA and commercial industry. So we were excited in February to see the executive order that was signed on AI.
If you look at that executive order, you might convince yourself it was the first time an executive order had been signed on one of these big technology waves, the big waves being the work we did with client-server early on, then mobility, then cloud, and now AI. AI has one thing that's different from the previous waves that came before it: it's bigger than all of them combined. I will grab onto something here. We do spend a lot of time as leaders in this community talking about some of the challenges with AI. How secure and robust is it? Where does bias exist? Things like that. I think it's important for us as leaders, and for us in this room, this community, to spend a lot of time talking about the progress being made and the things being done to help mankind, improve the planet, and make a significant contribution. There are children, literally, I'll say high school kids, making a profound impact on the world we live in through the work they are doing around AI. So as much as we sometimes want to act out of concern, and we want to address standards, international bodies, and the role they have relative to national strategies, I think it's really important for us as leaders to celebrate the amazing progress that is under way. If you look back in the history books, the federal government has been touching AI since the '50s, and we had a major breakthrough in 2012. So the work going on here is the most challenging, exciting IT or technology transformation of our respective lifetimes. I'm excited to be part of this panel and to make a contribution. Great, thank you. So the rush to develop standards for AI is new, but the recognition of the importance of standards, and of the standards process, is not. We've seen this kind of cycle happen before with previous generations of technology.
So what kind of precedent exists for how we're going to approach the AI standards process? What other models can we look to, or are there potentially new challenges we have to address? Well, if you look historically at the way standards have developed, take the computer era, the information age: we had a lot of technology developed, and standards developed for those technologies. Historically, the world-class ideas that were creating innovations in that field were coming from American companies. And so certainly we very much support the voluntary, consensus-based, open, transparent, industry-led standards development process. Because most of the inventions and innovations were coming from American companies, for the most part the folks at the table in conversations about technical standards were primarily American companies, and so that process worked out well. But I think we can't assume that because it worked out well for American companies in the past, it will continue to work out well going forward, because now the landscape has changed; economic competitiveness has changed. We have strategic competitors in this space who also recognize the importance of technical standards. So the process that worked well in the information age, we want to foster going forward: that open, transparent, consensus-driven, voluntary approach. But we have to recognize we're in a new climate now, and global competitiveness now requires us to be more intentionally proactive in promoting that open, transparent process, so that we can make sure all of our good ideas coming out of the United States have an equal footing.
We're not afraid of competition internationally, but the standards process can't just be presumed to take care of itself. In some sense, the federal government, not NIST in particular but broadly speaking, hasn't recognized the importance of standards for AI because we presumed the process that worked in the past would continue to work well going forward. The fact that the process worked well for computer innovations in the past doesn't mean we can let our guard down and presume that going forward we don't need to be more proactive in promoting that open process for technical standards. So I concur that if we use history as a guide and ask how the U.S. industry and ICT sector came to be so strong, there's no question one of the underpinnings has been a dynamic standardization environment. That dynamic standardization environment has been created by keeping open all possibilities across a full spectrum of approaches, everything from national processes to international processes, and U.S. industry has been adept at making use of that full spectrum depending on what you're trying to get done and at what time. I want to take a step back to my opening comments and recognize that I agree with Dr. Parker that things have changed. But I think they have changed in a fundamentally different way from what people are thinking about with AI. With open source software, you're going to see a new factor in how interoperable engineering is addressed. Engineers are going to move to a much more rapid pace of interoperability work by moving projects up into that environment.
That does not take away from the role standardization is going to play, but it means things are fundamentally different, not in the sense that other nations will beat us by standardizing first or something of the sort, but as a function of how engineers and contributors come to the table and lead with innovation, lead with ideas that bring about a foundation of understanding, and address issues transparently, so that you can understand what the technologies are and policies can be built on them. It's not going to be a function of creating standards that favor one nation over another or put up barriers, because that is a misuse of standardization. It's a function of using standards as a way to make sure the contributions are out there, strong, and leading the discussion. You do want to preserve that open system as much as possible, following the principles that have led to its great success. So to me there's a great deal of merit in looking at where standardization has been while recognizing there has been a fundamental change, but it's not about who is racing to get the first standard. Those of us in industry will tell you standards can be market makers, but products really come down to the aggregation of sometimes dozens if not hundreds of standards. It's really the layering of higher-value work above and beyond standardization that determines market success. I would put it in a slightly different context, trying not to repeat anything that's been said. There are a few things. What have we learned from standards efforts that have come before us? The most recent would be the work that has occurred in cybersecurity, where 160 different countries have rallied around different aspects of standardization. I think that's a pretty good model as we think about standards at scale.
Where the model may break down a little is that it still is not at the scale or the complexity we're talking about with respect to AI, because we are talking about nations that want to create immense value for themselves, and we are talking about concerns with the application of AI. So we've got to deal with that. The other thing, as I mentioned in my opening comments: when we think about society, and we think about trust and security, an important aspect of the adoption of AI will be improving society's belief and trust in the technology and its applicability to AI for good. I think standards play an important role there. I think NIST is really important, and there are a couple of things. We mentioned a couple of ideas around standards, from open standards development to the progress we've made thus far in AI. The other thing I would recommend for the federal government is that it's not just about the standards themselves; it's about the use cases we may be interested in in the federal government. For example, on the civilian agency side, when we think about waste, fraud, and abuse, there are different things we might consider relative to data and citizen privacy than, say, at the Department of Defense when we think about platform sustainment or cybersecurity. Importantly, in the last administration one of the reports that came out talked about benchmarks and standards and prototypes. I think it's really important that we actually get started and build some prototypes and lessons learned in use cases that relate to how the federal government might adopt and deploy AI, because that may produce learning that informs our position on standards. Great. So I want to pick up on a theme all of you addressed: this idea of U.S. leadership in standards development.
One of the issues we keep hearing about as we talk to folks in industry is that in the international standards body community, in these organizations that exist, the presence of the Chinese government is increasing. They are sending huge, coordinated delegations. China picks national champions. They have clear, specific goals in mind. The U.S. approach is different: we send government representatives and we send industry representatives, and we're not trying to champion a particular company and develop standards around it, but to create competitive, fair, open standards. The concern we keep hearing is that China is much more effective at potentially tipping the scales, should they choose to, in an anticompetitive way to benefit their own domestic industry, as they've done in the past. So what is the solution here for the United States? How can industry and government work together to create a fair, even playing field for standards development? I'm happy to start, but I'm sure everybody has good opinions on this. I'm going to be the contrarian on the panel; it seems to be my role today. I think there's zero evidence that the Chinese have an unfair advantage in the international standards system. The reason I would say that is that the WTO TBT principles these bodies follow very explicitly prevent dominance, or have rules in place to diminish it. And I speak about this as a very large corporate player; I recognize the dynamics at play in these bodies. As a country and as an industry, we would far rather have the Chinese involved in the international system, where they are engaged in one-country-one-vote dynamics, in an environment where they need to bring ideas to the table like the Germans do, like the Americans do, like the Japanese do, and argue them out in a community, rather than develop standards behind a wall in country and use things like their One Belt One Road initiative or trade agreements to impose their system on others as part of financial engagements.
Those outcomes would be far worse for us than encouraging all countries to be at the table in an international dynamic where you have rules in place to prevent dominance and things of the sort. At this point, and I recognize there is a big discussion in this country around 5G and the role the Chinese play there, I want to broaden it out and say that I have people involved in an incredibly wide array of standardization technical discussions, and we have engineers in rooms where Chinese companies are present. There is very little evidence that their contributions carry more weight than anybody else's or, in fact, get adopted. They do have bounties in the Chinese system for making contributions: they count the number of contributions and the government makes payments. We've seen Wikipedia articles cut and pasted and submitted to standards bodies to get the payment back home. That is not a path to success in a standards body. The Chinese are learning. They are coming to the table with increasing quality. If we take the example of Huawei, they're hiring real, decent, strong standards professionals from around the world so their ideas can be heard, listened to, and engaged with. And there are scenarios where people are concerned with technical elements; I'm not here to be an apologist for Huawei. I'm saying we should think about and protect that which has been incredibly important to our economic growth as a country: a standardization system fundamentally predicated on the WTO TBT principles, which has led to an incredibly strong outcome over a long period of time. So to the extent that it's easy to point at one player and say I'm concerned about them, I don't believe the standardization system is the place for that. Market competition, investment by governments in R&D, expanded intentionality on national security concerns, much like we're seeing over the past week.
Those are government mechanisms that it's not my place to comment on. I would say the standardization system itself will boomerang on us as an industry if we start using it as a cudgel to beat other countries over the head with. I want to clarify; you're right, I wasn't clear in my question. We're not seeing evidence that this is happening yet. The concern is that it could happen, and that we might not be well equipped to respond. You're right, and I'm not trying to cast aspersions on anyone. I agree the objective of having the federal government more engaged in standards is not to change the process; as I said a moment ago, it's absolutely to foster the open, voluntary, consensus-driven approach. In the cybersecurity area, and I'm not a cybersecurity expert, nor in industry, I have read in many reports that the challenge is not with the international standards-setting process. As it relates specifically to China, it's the internal standards that are forced upon any industry that wants to operate in that domain. They have to operate under standards set by the local nation, not based on international technical standards. The challenge then for foreign industries outside that nation is that now you may have to meet two sets of technical standards: one is the international standard, the agreed-upon process we all think works very well and want to foster and strengthen; the other, if nations don't want to go along with that, is the barriers they set up for our own industries to participate in a very large market. That's one of the concerns, and in some instances it may be a political question of how you encourage all nations of the world to abide by and participate in this international standards-setting approach that has worked so well historically, and which we very much favor. A couple of comments.
If you believe in the prospects of AI for good, then you would expect that every country has something to contribute. That would be the U.S. just like China. I think in the case of China, it's a very big, powerful country that has made very big and broad commitments to AI, and sometimes people worry about that which they don't know. But where we are right now with the development of AI for good, this is a global team sport. It's going to require big companies like Microsoft and NVIDIA and higher education institutions around the world. It's going to require those who are part of the industrial base and those who are part of the innovation cells. There's a lot of work. As I mentioned earlier, there are actually few AI standards at the moment, and much of the thinking is preliminary. So I think we need to be careful not to cast all this fear, uncertainty, and doubt on the prospects for AI for good; we've got to let this innovation cycle go, let it take hold. The point of the matter is we have a bigger human resources challenge in AI today than we ever did in cyber, you know? It's getting bigger and faster. So we require global participation in this. I'll paraphrase in part a conversation I had with a senior government leader, who said she was actually less worried about any particular country and more worried about the United States. This nation has done extraordinary things for this planet and of course for our country, and I think if we're focused critically on our ability to contribute, we'll be just fine on the global stage. Great. So just picking up on something you mentioned about the standards themselves and how much work there is to do here: how mature are AI standards today? Have we gotten the basics out of the way? Is there a lot more work to be done?
Are there particular areas where we've seen early success that we should look to as we flesh out the entire standards ecosystem? Want to start? We're really early. We're really early. There have, of course, been several meetings; we've mentioned SC 42, and there are four subcommittees that are part of that, and there's work going on. Here in the U.S. we've done some work around benchmarking, which has been an area where people want some kind of standardization, so there's the MLPerf benchmark. There are open tools, things like ONNX. And then there's a set of meetings that have occurred on standards around the globe; the Department of Defense is part of them and civilian agencies are part of them. But it's really early. I would say it's really early, really in the formative stage: the committees, the leadership, the roles, and the like. But I will say I think we could be more active in the work that has occurred thus far than we are today. So I would encourage that, whether it's NIST on behalf of the federal government, or the federal government and ecosystem partners, my own company included. I think we could be more assertive on both the international and national stages as it relates to standards work. I also concur that we're at the front end of a lot of work. But I'll also take a moment to look back and say, as we talk about something critical like the sharing of data sets and the ideas around open data sets and all those discussions: there are dozens of critical existing standards around data, data formats, and interchange that are all going to be critical to the work of AI. There are standards around cloud computing that are going to be critical to AI. We need to recognize, as you inventory what is important for AI, don't just throw out the baby with the bath water.
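The MLPerf-style benchmarking the panel mentions can be illustrated with a toy sketch. Everything below is invented for illustration: real benchmarks like MLPerf standardize the models, data sets, and quality targets being measured, while this sketch only times a small pure-Python matrix multiply. The point it shows is the core idea of a benchmark standard, a fixed workload plus a repeatable measurement procedure so numbers are comparable across systems.

```python
# Toy benchmark harness (illustrative only, not MLPerf):
# a fixed workload is timed over several runs and the best
# wall-clock time is reported, so results are repeatable.
import time

def matmul(a, b):
    """Naive dense matrix multiply; the fixed 'workload'."""
    n, m, p = len(a), len(b[0]), len(b)
    return [[sum(a[i][k] * b[k][j] for k in range(p)) for j in range(m)]
            for i in range(n)]

def benchmark(fn, *args, repeats=3):
    """Return the best wall-clock time over several runs, in seconds."""
    best = float("inf")
    for _ in range(repeats):
        start = time.perf_counter()
        fn(*args)
        best = min(best, time.perf_counter() - start)
    return best

a = [[1.0] * 50 for _ in range(50)]
elapsed = benchmark(matmul, a, a)
print(elapsed > 0)  # True
```

A real benchmark standard additionally pins down the input data, the required accuracy of the result, and how results may be reported; the measurement loop above is only the simplest piece of that.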
You have a number of things that exist today that are essential to that next step around AI, the step we'll talk about of grabbing hold of large data sets and understanding them. Then you have new work, and absolutely we're at the front end of that. I would say the definition work at JTC 1 is the place where NIST is active and the federal government is engaged with industry, and we strongly encourage that to continue, rather than doing something repetitive and writing definitions, terminology, use cases, and reference architectures here separately from everybody else. Participate and stay engaged where everybody else, where the experts, are already working. And then we talked about open source. It's really interesting to see what's happening there. You just mentioned ONNX. ONNX, the Open Neural Network Exchange format, is a great example, but there are standardized forms of something similar. Competition is good. You're going to see different formats going after different exchange or interchange ideas, showing up either hosted by consortia or by foundations that might crop up around a certain approach, and they're going to be in competition not only with each other but with the formalized requirements that might come through a standards process. That push and pull, that tension, is essential: the market has a funny way of rearing its head and saying I like this and not that, and one wins out, and that's a critical part of the process. We're at the front of it, but we also have a huge existing swath of work that's really important. Yeah. And there's always this question about timing: when is the right time to push for a standard in a particular area? AI is still very new, but people are also concerned about issues of governance and oversight. You can't govern and oversee if you don't have good ways of measuring. Is the system fair? I don't know; how do we measure if it's fair? That's a good area for digging into technical standards.
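The "how do we measure if it's fair" question above already has candidate answers that a measurement standard could pin down. One of the simplest, shown here as a toy sketch with invented data, is the demographic parity gap: the difference in positive-outcome rates between two groups. It is one possible metric among many, not an agreed standard, which is exactly the panel's point.

```python
# Toy fairness metric (one candidate among many, not a settled standard):
# the demographic parity gap compares positive-outcome rates across
# two groups. Outcome lists below are invented for illustration.
def positive_rate(outcomes):
    """Fraction of favorable (1) outcomes in a group."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_gap(group_a, group_b):
    """Absolute difference in positive-outcome rates between groups."""
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Invented example: 1 = favorable outcome, 0 = unfavorable.
group_a = [1, 1, 0, 1, 1, 0, 1, 1]   # 75% positive
group_b = [1, 0, 0, 1, 0, 0, 1, 0]   # 37.5% positive
print(demographic_parity_gap(group_a, group_b))  # 0.375
```

A standard in this space would have to say which metric applies, over which groups, and what gap is acceptable; the arithmetic itself is the easy part.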
There's this concern that if you standardize too early, you're missing out on good ideas still coming down the pipe, because the area is still new. But standardization can also really accelerate progress in a field. I'm personally a robotics person, so let me give an example that ties into the open source point. The nominal middleware standard came when the Robot Operating System was introduced. That created not only a standard in the open source community for how you build robot systems, but the ability to reuse everything. Prior to that, you had to recreate things, you had to write everything yourself. If you wanted to build a sensor interpretation system that could understand a camera image of some sort, you had to build it yourself. The Robot Operating System allowed the community to come together and interchange all the parts the community had built, so now we can all work together and accelerate the field. I think carefully defined and critically important areas of standards work can accelerate the field forward. One of the key questions for this plan NIST is working on is how we prioritize the areas of standards work that will be most impactful, in a good way, on the development of AI for the good of all of us. Can I make one comment on that? Yeah. We haven't talked about it, but it's worth noting and thinking about. When we think about these big four waves of technology, client-server, mobility, cloud, and now AI, there is one thing that is distinctly different and unique about what's occurring in AI, and there are many, but one is the democratization of the industry. Take cloud computing: there aren't that many world-class cloud companies. Are there ten that dominate the world's cloud? But in the case of developers around AI, there are millions, and there are going to be a lot more. And so the access to the technology is something I think the world will benefit from.
But it's something to pay attention to, because the development around AI is very different from some of the big technology waves that have come before it. I want to get back to something Lynn was saying that I think is really important. There are different kinds of standards; different things get standardized in different ways. There are definitional or foundational standards you can go after, and those can become the basis for public policy conversations. They become the basis for internal procurement practices. They're simply: what does this word mean, how are we going to use it, and can everybody agree on the same use of it? Then you get to technical standards, and the example on robotics is a great one. Those can be either formalized in a standards body or become de facto by nature of adoption: enough people are using something and it becomes the means by which common practices are set. Then we're getting into this new frame for the IT industry in particular: there's been a push toward management practices or governance practices that are standardized for the purpose of audit. I see Gordon sitting out there, and they've done great work to help people understand the cascade of conformity and conformity assessment. For an industry that's moving to become more regulated, we need to be able to declare what the important criteria are. So if we talk about trustworthy AI, what do you mean? We can talk about algorithmic transparency, but if we crack open a neural network algorithm, you won't understand it. Even the people who write it don't understand it; the tools don't exist to do that. You have to talk about transparency in a different way, which is why people move to explainability. You start talking about the source data: how did you treat it? How did you establish appropriate development methodologies? And then, after the fact, what did the thing do? Can I describe what the outcome was, and how do I track and measure that?
It's going to be a spectrum approach. Those aren't going to be written in code; they're going to be written in English, or whatever language. Someone is going to sit down and write out the behaviors and measures. That's a standard. But then you get to internal operational standards, which don't get formally standardized, but if you take a large organization like the U.S. government and start putting in place behavioral requirements for agencies, or in large companies for procurement or anything else, those become standards that have market effects as well, on supply chains or large procurement mechanisms. So it really behooves us, as we look at standardization for AI, to keep in mind that it's going to be a much broader picture than whether or not there's a protocol for big data access or some sort of magic protocol for AI transfer. I don't think that's the dynamic that's going to play out. Great. So I want to pick up on that thread, and on something Lynn mentioned about the concern around the use of standards for things like transparency or safety, or, as you most often hear it described, ethical AI. That's the most common buzz phrase in public policy discussions about AI. Do we know what technical standards for ethics look like? Is it more useful to think about it in component parts? Is this a meaningless phrase? How is that playing into the debate? I understand if no one wants to jump on that one. My team is tired of hearing me go on about this, but I'll share it with you because they're not here to stop me. If you talk to an engineer about ethics, utilitarian ethics, or different ethical models, they're going to look at you blankly. It's not something they've studied. We need people who are truly trained in ethics, people who have a basis for the discussion, and I have a strong opinion that there are things the engineering community absolutely can do.
You can describe the systems in an engineering context. Take something like bias. Statistically, you can map out in statistical models that there's bias in all data: the time of day you collect it; it might have to do with the race of the individual, but it might be that we were collecting data from our hospitals and I collected more data from this hospital than that hospital. There are things engineers can address in the description of how do I think about bias in systems? The ethics of the fairness of that bias needs to be talked about not by the engineering community but by people who are deeply steeped in those issues, and then the reaction from the engineering community is one of responsibility for what gets built and how they build it against those models. But we have to find a joining of the conversation to say it's not going to be some magical standard that's written that defines ethics. I guarantee you the French see it differently than the Saudis, and differently than the Australians. You don't want a standard that exports one ethic. It will get ignored by everybody. But good engineering practices that support the application of strong ethics, no matter where you're applying them, are a different discussion. Yeah, and I might add that all engineering teams that are good are having this conversation. In all customer meetings that have rich technical depth to them, this is part of the conversation. And I think part of this first stage is acknowledging that this is an important thing. So if you follow the federal government, there's all kinds of work going on as it relates to ethics. I know, for example, the executive director of the Defense Innovation Board has been, with a group of folks, to Austin and Silicon Valley. He's conducting listening sessions, where they're listening to the community, or to companies or innovators or education institutions and the like.
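The sampling-bias point above can be made concrete with a minimal sketch (hospital names and record counts are hypothetical, not from the panel): before any model is trained, simply counting how many records each source contributed exposes skew in the data set.

```python
from collections import Counter

# Hypothetical patient records tagged with the hospital that produced them.
records = (
    [{"hospital": "A"}] * 900   # over-represented source
    + [{"hospital": "B"}] * 100  # under-represented source
)

counts = Counter(r["hospital"] for r in records)
total = sum(counts.values())

# Share of the whole data set contributed by each hospital.
shares = {h: n / total for h, n in counts.items()}

# A simple skew measure: ratio of the largest to the smallest source share.
skew = max(shares.values()) / min(shares.values())

print(shares)  # {'A': 0.9, 'B': 0.1}
print(skew)    # ~9.0: hospital A dominates the data 9-to-1
```

A check like this says nothing about whether the skew is fair, which is exactly the panel's point: engineers can measure the imbalance, but judging its ethical significance needs people steeped in those issues.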
So just like most aspects of AI, we're still early. But I will say, as a company that's playing a role here, the conversation occurs every day across many aspects of the business and at the intersection of customers and use cases. And I will add, too, that these challenges of ethics also go into areas like safety. If you think about the safety of a health care device that's AI-based, you have to involve a multidisciplinary team that includes the health care providers who know what it really means to be safe. It's not just about the ethicists and the engineers; it's really a multidisciplinary activity across all the applications of AI. That's the real challenge, often getting communities to understand each other. I can't tell you how many meetings I've been in where we have people trained in one discipline and they would talk, and people trained in a second discipline and they would talk, and they seem to have no understanding of what's happening with each other. So I think this then gets into the educational area, where we have to begin training ourselves to have more of an interdisciplinary focus, so we're not just a computer scientist or just an ethicist but have both combined in multiple areas. Does this mean that my son who is taking liberal arts is going to have a job? As long as he takes some computer science classes. I was a liberal arts major. You might want to text him. So we talked a little bit about industry's role in developing these standards. I want to build on that a little bit. A common refrain you hear in the standards community is that standards are like toothbrushes: everyone wants to use their own. I understand why a company would want to use standards it developed itself; it built its products around them. But we know that the mutual adoption of common shared standards is better for competition, better for consumers, and better for innovation. Can you describe how industry is playing nice here?
What are their incentives to cooperate? How are they engaging with each other? And Lynn, I'd like to hear what you hope the role of the federal government is in fostering this cooperation. Do you want to start? I want to hear from you. All right. Standardization is by definition collaborative and competitive at the same time. Everybody who walks into the room in a standards body is doing so on purpose. Nobody has ever tripped, fallen down, and made a standard. Never happened. You show up with contributions, or you show up as a participant in order to work with those, and then over time people choose to implement. We would always advocate for a rich competitive environment, both of standards bodies and of standards participants. That competition drives the best possible outcome, which is that the best things move to the top. My organization has been involved in standards that have been successful, and we have been involved in multiyear standardization efforts that were failures. That is part of the process. And sometimes those dynamics have nonlinear pressures on them, so you could look at VHS and Betamax as the classic example, where the less good technical standard won out because of other factors. We want that richness. That is exactly what has propelled the technology industry to be so strong. But I'll take this a step further and recognize that these technologies we're talking about now are not about the technology industry at all. This is about ubiquitous technologies that affect every sector, and every sector is going to turn on it and do its own things. The equipment manufacturers are going to have processes that they're going to deal with, and the health care folks and the automotive folks are going to do it, and the way we break out of the recently espoused opinions of Kai-Fu Lee is that we reject that notion.
We reject the notion because really what it comes down to is: can the competitive nature of industries come to bear, but also bring to bear all of their resources, pool resources around data, for example, and do shared learning? That's going to be a new model for many organizations, but standards at a vertical level and standards at a horizontal level will assist, and that's the way you break out of the winner-take-all concept. AI standards are going to exist. Period. They're going to exist. They'll take some time to develop, and most countries of consequence are going to participate gainfully. As was said, this is one of those technologies where every person in every company and every country is going to be affected, and we hope affected by AI for good. I said this before: if you believe in the prospects of AI for good, and I think the world either does or wants to believe in that, then you'll be open to the notion that standards are important to that whole process, that whole development cycle. And so I think that will occur. I think it's really important that the government, in our case our federal government, plays an important role but is also careful. Right? We've talked a little bit about politics. We've talked about global competitiveness and the like. So if I look at the rich tradition of NIST and the contribution that NIST has made to the federal government and to the world we live in, I think the best work NIST has to do is ahead of it in the case of AI. I'm certainly hopeful and confident that we get there, but I think we have to make sure that NIST and the standards activities that come out of the federal government are careful, especially early on, because if there's an exertion of power and control from, quote unquote, the federal government, the international standards process breaks or takes twice as long as we might desire. Yeah.
One of the great advantages, one of the great opportunities the federal government provides, is the opportunity to convene people, to convene important stakeholders to come together and make requests of the federal government as to ways it can help. That's really the point of this directive and the executive order: to find out from you, from the community, what are areas in which the federal government can be more engaged? Clearly NIST, we're very fortunate as a nation to have NIST here. That's been an honest broker working with industry, but the government does not set standards. The government works, hopefully together with industry, to help further the industry's goals of standardization. And so I think the important role of the federal government is to listen, to convene, and to act on the good information we get from the stakeholder community on how we can help more, beyond what NIST has been doing valiantly over the last many years. Great. I have plenty more I want to ask you, but I promised folks an opportunity to ask questions. We have just under 15 minutes left. It looks like there are two mics on the sides of the room. Could you please introduce yourself, say who specifically you're asking the question of, and don't do the terrible thing of making a statement rather than asking a question. You mentioned there are two main thrusts for AI standards: one is technical standards, and of course there are standards of responsibility and ethics. And the great overlap is explainable AI. You could standardize things like algorithms and, basically, levels of explainability. I know that DARPA is working on aspects of explainable AI. I'm curious as to the standardization you could place at NIST on different levels of explainability. I'm thinking about standards on accessibility. It's a parallel: where you have standards on accessibility, you could have standards on things like explainability.
Just thoughts on XAI and standards and NIST's role. I think one important role is to do the R&D that informs a lot of the technical standards that need to be developed. Explainable AI is a perfect example. The technical approaches to achieving the kind of explainability we want don't yet exist. It's not that NIST isn't doing that particular R&D, but there is an important role in doing the R&D necessary to inform the development of technical standards in these areas. I think again it gets to the question of timing and whether or not we know enough about how to accomplish certain kinds of characteristics, like accountability, in order to set up a standard. But you can begin looking at what kinds of technical studies, R&D studies, need to be done to help inform that process. There are a lot of questions. I'll go quickly and say that I think at the heart of your question is an interesting piece. I don't think we should talk about standardizing algorithms. That means standardizing innovation, and that is a way to create an innovation dead zone. But one of the parts NIST is trying to think through right now in the process is tools. Is there fundamental research that's going to go on around explainability? The tools that can be used to understand what something is doing don't necessarily exist. There's a huge piece that will happen there. It's not about standardizing the tool. It's about building tools that help you understand, possibly, what behaviors need to be articulated as a standard, or that create an opportunity for other technical standards that might come from that understanding. But I would be really careful about using the term standardizing an algorithm. You don't want to standardize innovation. That's a sure way to fall behind. That's picking a winner, and you want a plethora of people out there competing to drive innovation forward. Yeah. I meant levels of explainability, like levels of accessibility. Good answers. Appreciate the responses.
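The "tools that help you understand what something is doing" idea above can be sketched with one simple, model-agnostic probe (the model and data here are hypothetical toys, not anything the panel described): permutation importance treats the model as a black box and measures how much accuracy drops when one feature's values are shuffled across rows.

```python
import random

# Hypothetical black-box model: we can only call it, not inspect it.
# It secretly depends on feature 0 and ignores feature 1.
def black_box(features):
    return 1 if features[0] > 0.5 else 0

def accuracy(model, X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(X)

def permutation_importance(model, X, y, feature_idx, seed=0):
    """Drop in accuracy when one feature's column is shuffled across rows."""
    rng = random.Random(seed)
    baseline = accuracy(model, X, y)
    shuffled_col = [x[feature_idx] for x in X]
    rng.shuffle(shuffled_col)
    X_perm = [list(x) for x in X]
    for row, v in zip(X_perm, shuffled_col):
        row[feature_idx] = v
    return baseline - accuracy(model, X_perm, y)

X = [[i / 10, (9 - i) / 10] for i in range(10)]
y = [black_box(x) for x in X]

# Shuffling the feature the model relies on typically hurts accuracy;
# shuffling the ignored feature cannot change any prediction.
print(permutation_importance(black_box, X, y, 0))
print(permutation_importance(black_box, X, y, 1))  # 0.0
```

This doesn't open the black box; it only characterizes behavior from the outside, which is the kind of tooling the panel suggests must exist before behavior-level standards can be written.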
So I was mistaken. We have until 10:45, so we have plenty of time for questions, and then I want to ask a couple of closing questions. Thanks. Director Copan mentioned the work happening right now with the privacy framework. It might be cutting it close, but it seems like the idea is to get an AI standard into the privacy framework as a reference. Whether or not that is the idea, can each of you address the appropriateness or importance of an AI standard that NIST develops getting into that privacy framework? So I'm not familiar with those conversations. I think the terminology of an AI standard is not what would show up there. Certainly privacy is an important aspect, and with the use of data, broadly speaking, AI is very data intensive. Some types of AI are data intensive. Clearly there's an important connection there. There are many privacy issues with data at large that aren't necessarily AI issues. There's an overlap, but I can't speak to what NIST's activities are in terms of connecting the two. I guess my thought around that is that the privacy discussion is a discussion of prescriptive regulations worldwide. If you're working on AI, that does not abrogate your responsibility to meet the privacy laws. So there is a really interesting conversation to be had, for example, around the curation of data sets and the responsible behaviors or responsible use of data. That has to be taken in the context of the prescriptive laws that exist now. Those become the measuring stick by which we're all held accountable. So I think that to the extent we think about privacy, we do have to factor it in.
Now that you have cybersecurity laws coming in around the world, some of them more prescriptive than others, companies will be held to those requirements. In fact, it will be a balancing act of understanding what they are, and standards, I'll come back to the point I made before, can become a method by which you establish the criteria of good behavior that underpin responsibility. But standards don't answer everything. They give you a piece of the puzzle. Yeah. I think you touched on an area where you have to be careful, because if you refer back to the OECD principles, the second one talks about the rule of law. I think we have to be careful about how the rule of law exists on an international or national stage and be careful not to bring some of those things into standards too early. Hi, Mary Saunders, American National Standards Institute. We'll hear later from a panel about federal agency needs, but I wanted to get this panel's perspective on federal engagement and tools and activities. Federal agencies engage on the basis of their mission space, as I'm sure you all know. They engage in some cases, where it's a regulatory agency, to support a decision regarding regulating or not regulating, or a procurement agency's decision as to what to buy. NIST is unique within the federal agency space. Its mission drives a wide range of participation in standards activities. That's its mission. You'll see 25 percent of the technical staff at NIST participate, ranging from open source to voluntary consensus and the whole range. You will not see that at other federal agencies. A few thoughts from the panel members about the different levels of participation and different considerations for federal agencies in engaging in standards activities? Go ahead. Go ahead. All right, I'll go ahead. From an industry perspective, being on the outside looking in, and I mentioned this earlier, I think you touched on it.
There are different aspects of standards when we think about what the Intelligence Community might do, or what the Department of Defense might do, or what civilian agencies do with respect to citizen services. That's why I recommended earlier that we look at some of these really big use cases, and we've got to create some environments where we can test these use cases out and learn from them. What's occurring in cybersecurity, of course, relates to the entire federal government, but waste, fraud, and abuse was $141 billion in 2016, I think the last time it was reported on. That's a big challenge to go after, and the way standards relate to that might be different from the way we worry about platform sustainment for the Department of Defense, where the big issue might be more about data rights as they relate to the systems integrator or to the Department of Defense, which might own them. I think there might be some interesting value to be gleaned by looking at use cases associated with different aspects of the federal government. I think one of the challenges we have is an education challenge: an understanding of what the impact of technical standards is on the missions of all the agencies. I looked back at some testimony, I believe it was Walt Copan's, and looked at the comments on why it benefits the U.S. government to have a vibrant standards development system. The benefits to the U.S. government are things like eliminating the cost of having to develop its own standards. Think of all the things DOD purchases. You can imagine there are a lot of standards, and not just at DOD, but to use that as an example of an agency that does a lot of acquisition. It decreases the cost of goods, provides incentives to serve national needs, and encourages long-term growth. It promotes efficiency, economic competition, and trade, and furthers the reliance of the U.S. government on the private sector.
Those are a lot of great benefits that all the federal agencies would want to experience. I think there's not a broad understanding of how technical standards can feed into that: the fact that these are advantages to the government itself from having robust, appropriate technical standards. So at a minimal level, hopefully this directive and the executive order will help raise awareness across the agencies, not just NIST but the other agencies, of areas in which their engagement could contribute significantly to furthering the technical standards development that actually benefits their own mission, their own agency. It's a long-term perspective. This is always a challenge: people are busy with the here and now and don't see that the long term also pays off. But I think that's an important part of this directive and the executive order, to raise awareness across all the mission agencies of how they can contribute and participate. And if I could just jump in quickly, I want to emphasize the importance of the executive order's mention of OMB Circular A-119. This document is essential. It says the federal government should use and participate in industry standardization. Voluntary, industry-led standardization is the cornerstone of the standardization system. But what it means is, as you mentioned before, convening discussions like this is an essential role the government will play, but so is participating. The agencies themselves have people with real needs and an understanding of how they want to deliver government services, whatever the style, civilian or defense. How are they working on those and on the use of AI within that? Then how are they establishing policies and regulations? And then how are they supporting U.S. industry interests? There are two items I put on the do-not-do list. The do-not-do list includes: don't put your thumb on the scale. Participate, but don't pick winners. And secondly, don't use standards as a trade barrier.
That's a fundamental misuse of standardization. I am curious, did we answer your question? Yes. Good. I want to touch upon one of the points Jason made and Lynn spoke about. It's a fundamental question for this discussion. What is AI, and is AI something completely new, or does it build upon existing capabilities? My view is it builds upon the capabilities we already have for modeling, risk management, testing, development, training, and security. If you take that track, then you really look at what AI adds. AI adds machine learning. That creates a whole set of complexities and a new dimension, and you have to bolster the existing capabilities. But more fundamentally, does it push us in the direction of an ethical discussion? When you talk about bias, transparency, explainability, security, and privacy, that whole set of dimensions, even though they are not new, I do think from a consumer's perspective the equation changes dramatically when a decision is being made by an AI versus a non-AI system. To bring this conversation back to the points you've made, I think it really starts by us asking: what is AI? Is it completely new? Are we going to treat it separately, or is it part of a number of parts that already exist, and then what's missing? I'll make a quick comment. Is it new? No. The federal government has been on record doing AI work for 60 or 70 years. As I said earlier, I think 1957 is the earliest date I've seen. What's new and different about it is the number of developers and the access to the technology. It used to be, 30 or 40 years ago, that you would need supercomputers that took up whole buildings, and Ph.D.s, to do the kind of work required to build out these neural networks. Today people are doing AI on commodity parts.
The democratization of AI has brought the whole world in as potential developers and has created the application of AI in all kinds of use cases, AI for good and otherwise. Is it new? No. And to build on that, I think the fact that we've accelerated to the point that now we do have lots of use cases of AI brings into question what's under the curtain, what's in the black box. Some of the techniques, particularly the deep learning techniques that are very popular today and that have contributed to these use cases being successful now, are a black box. And so in that sense, because we're using it frequently and we can't look in the black box, that raises questions about whether it is being used appropriately, because we can't really understand what's going on. A lot of the techniques that worked in the past, say to verify a system based on some analytical understanding of the underlying mathematical model, are not applicable in many of these new cases for a variety of reasons. And so that, I think, has caused this stir now. The new piece is the application of techniques where we don't fully understand how to prove that they're doing what they're intended to do. Yeah, I completely agree with that. There are a lot of things that already exist, and there are a number of new dimensions that need to be catered for that fundamentally could change the game. Thank you. Yes, Steve Harrison, Department of Defense, Joint AI Center. I'd be interested in the panel's view on the role and type of standards to promote aggregated, collaborative, distributed artificial intelligence. As someone who has spent years doing multirobot systems, I appreciate that when you have different kinds of systems and you want to build a large-scale system that can interoperate, standards are critically important, because otherwise the systems don't know how to talk to each other.
So I think you can do that, of course, through one-off implementations, but it doesn't scale well. You can't easily add new kinds of systems into a large-scale distributed team. I think in order to have interoperable capabilities, standards are very important. The only thing I'll add to that, and I'm a broken record on this, is that it may not come out of a standards body. It may come out of a GitHub project today. It's no less, and I'm going to use the word broadly here, a standard, a normalized approach, but it might not be coming out of a standards organization with membership and governance and all the rest. It may be coming out of an open source project. We have about ten minutes left. Please keep your questions and answers short so we can get to everyone's questions, in the spirit of algorithmic fairness. Over here. My name is Colin. I'm with the National Cybersecurity Center of Excellence. I have a question about adversarial machine learning. It seems like an area that is ripe for the sort of standardization efforts we're talking about, and it's really a concern in safety-critical systems. I'm wondering what efforts companies like Microsoft and NVIDIA are taking to mitigate these sorts of risks that might be developed into standards or tools that could encourage safety-minded AI development. Thank you. So I will answer briefly, because I am not an expert in the area. What I will say is that we have active research going on in the space. The starting point, frankly, has been threat modeling and understanding what the picture even looks like before you get to actions you can take. The other step is to start looking at your development practices and putting in place things like, as in the past, a secure development life cycle approach, where you try to think about all the steps and make sure you're training your workforce appropriately in the things they're doing.
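The adversarial-machine-learning risk being raised here can be illustrated with a minimal sketch (the model, weights, and inputs are hypothetical, not from any vendor): the classic fast-gradient-sign idea nudges each input feature a small, bounded amount in the direction that most increases the model's error, and even that tiny perturbation can flip a classifier's decision.

```python
# FGSM-style adversarial perturbation against a hypothetical linear classifier.
# score(x) = w . x + b; predict 1 if score > 0. For a linear model, the
# gradient of the score with respect to x is just w, so the sign of each
# weight tells us which direction to nudge the corresponding feature.

w = [2.0, -3.0, 1.0]   # hypothetical learned weights
b = -0.5

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def predict(x):
    return 1 if score(x) > 0 else 0

def sign(v):
    return (v > 0) - (v < 0)

def fgsm(x, epsilon):
    """Push a positively classified input toward the negative class."""
    return [xi - epsilon * sign(wi) for xi, wi in zip(x, w)]

x = [0.6, 0.1, 0.2]        # clean input
print(predict(x))          # 1

x_adv = fgsm(x, epsilon=0.2)
print(predict(x_adv))      # 0: a small, bounded change flips the label
```

Standardized threat modeling for AI would catalog exactly this kind of attack surface, which is why the questioner sees adversarial ML as ripe for standardization work.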
But I'm going to stop there and simply assert that you have put your finger on an incredibly important point, and one that needs a great deal more research and consideration. I would offer the same. I believe the JAIC is going to have a national mission initiative around cyber. If you look at what's happening across U.S. Cyber Command, I believe even before being confirmed, its leadership talked about the importance and relevance of AI to the future of cyber. Vice Admiral Norton has made similar statements. This is a brave new field, and it's really interesting, because earlier we mentioned that some 160 countries have brought some kind of standards to their country relative to cyber, and the world is about to change. Right? Something promoting that change is 5G and more. As we go from hundreds of millions to billions and trillions of devices on the networks, the networks are getting faster. There's more data and more new things. So for us, this is a giant data analytics challenge. We've been working on developing some of our own tools that let us get at this big data analytics problem as it relates to cyber. We're still very early in that, and so our best contribution, from NVIDIA's perspective, is figuring out how to partner with systems integrators and others as they partner with the federal government. Cyber Command has its DreamPort initiative. There are ideas to do that, but a lot of work to do. One of the areas I'm frequently asked about is what areas we as a nation can work on collaboratively at an international level. As it relates to AI, the issue of how to build safe and secure AI has international importance, even with our adversaries. We want them to have safe and secure AI so we can trust that their AI is not going to do something crazy that sets off a chain reaction. Similarly, they would want that for us. This could see strong international cooperation and engagement. Hi.
I'm Andrew Ferris. One of the things that both Anthony and Jason highlighted as being important to the development of AI standards is open source software and projects. I agree with this statement very much, and I'm wondering how folks working in open source can best shape their projects to contribute to the standards process, and then conversely, how the federal government could encourage that in both the private and public sector. The federal government, I mentioned the example of ROS, the Robot Operating System: the federal government did fund some of the development of that open source work, but it's built by the community. That's one example, but the federal government does try to provide some resources for these types of open source projects that are clearly having a broad impact. I think open source is one of these very large topics where people love to use the word community, but it's a community of communities. If you look at the most significant projects, they are heavily funded, with a lot of deep research from private organizations, and the public sector is engaged. So one thing on responsible AI development practices is that it's going to come down to each project, and you look at the different foundations: an organization like Apache can put in place its own practices if it so chooses. That's a choice they have to make. People coming to the table in open source also have to have that sense of responsibility, be they private or public sector participants. I can't overstate the importance that open source is going to play relative to the nature of innovation, development, and interoperability in the future of AI. People can't say, "It was open source, so I didn't have to follow the GDPR." They're not going to have an appetite for that in Europe, or under the California law, you know, the California privacy law. "I didn't follow it because it was open source" isn't going to be an excuse.
So the open source communities themselves have to take on responsibilities, and when I say that, we're part of that community. Government is part of that community. Everybody who is acting in it is part of that community. Great. So it looks like we have two questions left and a few minutes. If I could ask you both to ask your questions, the panelists can answer. I am George Carlisle. My first question, and this question is for Jason: many multinationals, when creating products and services, collaborate inside the international standards organizations to minimize resources and not reinvent the wheel. What is their progress on creating AI standards, and how can we do it differently, or what is missing from what they're creating? And my other question is: can you give us advice for our breakout groups? For our breakout groups? My question is: we've used the term standards in many different ways in this conversation, and a few of you highlighted that there are different kinds of standards: definitional, measurement, policy, governance, compliance, process, and then also de facto tool standards. Given all of these various kinds of standards, how would you prioritize, in the plan we would try to generate, the work and efforts and our investment across those various types of areas? Or would you say we should try to approach them all? And what levels are we at, in terms of AI, in trying to address the various areas? That's the question. Thank you. So I'll try to answer quickly. The international standards process is inherently collaborative in nature, and organizations go in there not only to be more efficient but also to participate in the discussion. It's also important to note that standards are a process that leads to compromise. Nobody walks in the room and says, here's my standard, everybody has to listen to me. There's debate and back and forth.
Good participation is about listening and engaging. There are times when people push hard for their ideas, and in fact, anybody who has been in the standards world knows of meetings where people have been yelling and screaming and you get the fireworks, but the reality is the process is designed to encourage that collaboration. It's a deliberate act to be in a standardization committee. Everybody has an agenda; that's why you're there. But participation is essential, and getting people in the room who are experts and collaborating there is important. I'll let the other two take the second question. I think in terms of prioritization, for me it's all about impact: looking at whatever the kind of standardization activity is, what could it lead to, to help us economically, to help improve quality of life, to help with lots of different kinds of applications. I think it's not about the type of standardization approach. It's more about what is most impactful in a positive way that can move us forward. Yeah. And to address your question about feedback for the breakouts, I would just say: acknowledge what's going on. The train has left the station, and this is a transformation of our respective lifetimes, and some of the most important work we have to do as leaders and professionals in the business. If you look at what's going on in the initiative and what's going on in the Department of Defense, whether it's standing up the JAIC and AMIS and CMIS, and the executive order, I think the federal government has indicated financial commitment and the like relative to AI. And what I would challenge us on is how we can play a role as leaders in helping this thing go faster, because that's a really important aspect. I know a lot of times that's an overused term, but we need to go faster as a federal government and as an American public than we are today.
We have the tools to do that, and I think your leadership would be important to inspiring that journey.

Great. I'm very proud of everyone; it's 10:45 exactly. Before we turn it back over to Elham, I want to thank NIST for hosting us. This will be available online. Please feel free to visit our website at datainnovation.org; we are doing a lot of work, particularly on the oversight and governance aspects of AI. So thanks to NIST, and thanks again to our panelists. [ applause ]