Smart people I get to work with on a day-to-day basis, applying technology to solve problems. That spans a pretty big mission space. If you look at the day-to-day, there is a lot of science that goes on in there, from the atmosphere to the ocean and everything in between, and a lot of the AI work going on now touches all of that. We look at things like fisheries, where we can identify fish species and do more to augment scientists in the field. There are also interesting projects around understanding marine animal populations from data we already have available in the normal course of business.

There is a lot more we do with AI. We are moving into the forecasting realm with it. Today's operational forecasting is deterministic, not AI, but there are projects applying AI to things like precipitation prediction, and that is interesting work. There is still a lot that can be worked through on aiding the forecast. In the news lately there have been a lot of extreme weather events, hurricanes now, and a lot of folks are talking about how those events carry uncertainty with them. The forecaster has to do a lot of work to deal with that uncertainty. What we do now is run multiple copies of the model to understand where the points of uncertainty may be, 10 or 100 copies depending on what they are looking for. AI may help reduce that number (a toy sketch of the ensemble idea appears below). The other side of it is data assimilation, which basically prepares observations for use in the models. We want to get data into the forecasting pipeline at the right time; the later you can bring data in, the better the product you have. We could automate those processes to help augment a lot of the folks in the field, in data centers and offices around the country, to really get the job done faster and better.

The other side of it is extreme weather events, or weather events in general: how do we augment people near a wildfire or a hurricane? AI can be added to devices, drones, sensors, whatever they are, so you can get a lot more data in more situations. The temperature inside a hurricane is not something you can safely measure in person; the result for the observer is probably bad. With a drone, the risk is significantly less and you get better measurements. So there are a lot of applications across the agency, and it is one of the places where we see a lot of benefit.

Pushing it further, what can we do in the future? This is speculative, but I think it can help us work with the public more. Weather data is complicated. I am not a scientist; I rely on my friends who are scientists to interpret that data. But there are things we do on a day-to-day basis that are affected by the weather: travel, or if you are going to buy a house, what is the flood risk, or other risks from weather events. Being able to interact with that data in a better way, to have a conversation with it, sounds very futuristic, but these are the kinds of technologies that will allow us as consumers to interact with the data and understand it better: better understand the flood hazard to your house or neighborhood, and really make it a more digestible product for everybody. A lot of the goals within NOAA are about how we make things more understandable. You talk about products on timescales from seconds to minutes to hours, whether it is a tornado warning or a hurricane watch or whatever that may be, so people understand and can take action when they need to. I think these tools will be part of that in the future. I think there is a lot of potential.
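To make the ensemble idea above concrete, here is a minimal Python sketch, not NOAA's actual system: a toy stand-in "model" is run from many slightly perturbed starting temperatures, and the spread of the members serves as a rough uncertainty estimate. The toy dynamics, the 0.5 °C perturbation size, and the member count are all illustrative assumptions.

```python
import numpy as np

def toy_forecast(initial_temp_c: float, hours: int = 24) -> float:
    """Stand-in for a forecast model: trivial cooling plus a diurnal-style wobble.
    A real NWP model integrates atmospheric physics on a 3-D grid."""
    return initial_temp_c - 0.05 * hours + 2.0 * np.sin(2.0 * np.pi * hours / 24.0)

def ensemble_forecast(initial_temp_c: float, n_members: int = 100, seed: int = 0):
    """Run many copies of the model from perturbed initial conditions and
    summarize them: the mean is the forecast, the spread is an uncertainty proxy."""
    rng = np.random.default_rng(seed)
    perturbed = initial_temp_c + rng.normal(0.0, 0.5, size=n_members)
    members = np.array([toy_forecast(t0) for t0 in perturbed])
    return members.mean(), members.std()

mean_c, spread_c = ensemble_forecast(15.0, n_members=100)
print(f"ensemble mean: {mean_c:.2f} C, spread: {spread_c:.2f} C")
```

The 10-versus-100-copies trade-off mentioned above shows up here: fewer members give a noisier spread estimate, which is where a cheaper AI surrogate for the model could, in principle, buy back capacity.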
Currently, I think the goal is to make that data more accessible and usable, not just by a scientist but by the public and the people that we serve.

That was a very comprehensive overview of NOAA's larger mission regarding data and technology. I would love to drill in a little more. You briefly mentioned more innovative ways to collect the meteorological data that NOAA relies on, in addition to drone technology and, I believe you called it, edge technology. Does NOAA have any other emerging, novel, innovative ways to collect more enhanced and precise data? Every AI model that NOAA would like to train for more predictive capacity will need a huge variety of data to learn from.

There are lots of opportunities, and I think it spans the mission space. From the atmosphere down to the ocean, there are lots of places where you need instrumentation to take observations. That is one of those things. The other side of that is there are scientists throughout the world, right? Getting to them is hard because those areas are isolated. With the technologies that are available now, we can break those things down into something they can have in the field and start the processing out there. Start your processing at the edge, and by the time it gets to the middle you have a richer data set to work with. AI is part of that, and part of it is the architecture you run it on, working toward that edge-to-core model; I do not want to sound too much like marketing. How do we get the data processed and turned into a product faster? On the other side of that, it is about having smarter applications that really make it engageable. It is not just a forecast or a data set you can download, but real tools for folks to interact with it. I think the goal is to make sure it is usable, not just for the folks who are meteorologists, but for everyday citizens.

I am glad you brought that up. That interoperability of data, if I recall correctly, is something the Biden administration has wanted to bring into the modernization of the government: being able to have a lot of that quantitative material be accessible to a large audience, and NOAA is carving out its own little territory within that. In that vein, you mentioned we are witnessing a slew of very severe natural weather phenomena. I do not have to tell you about the Maui wildfires; Canada has reported several devastating ongoing fires in addition to the ones we saw before, and one of the main Canary Islands is suffering from burns eerily parallel to what is going on in Maui. We read about these incidents so much in the headlines, and you are talking about a more interactive way to communicate the data, which brings me to a bit of a general question. Does NOAA have any plans for a more upgraded and enhanced type of user experience related to these disasters or other predictive weather analyses?

That is a good question. I will start at the prediction part. I think one of the drivers for NOAA is the social science: how do we get people not just to interact, but to take action? Producing, observing, processing, in any event like a wildfire, is important. The preparation period is not very long, unfortunately; fires can move very rapidly and catch people off guard, and you have to find out where the fire started. A caveat here: this is just a technical practitioner's view. The event itself and the downrange weather make forecasting with a model a complicated thing. It makes the presentation of data more complex, and it involves a greater number of people.
There are other agencies involved as well. I think the drive on the social science side, and you can see it in the media and other places, is how we change the language to get people out of harm's way. I think that is part of the data piece: how we interact, how we make it understandable. That takes more than language; it takes data science and visualization science. There are initiatives to bring that to the forefront. We do that to a certain extent with everything, having things where you can work with the data and play with it to understand it better. There are ways we can do that. Showing an animation of the storm or a chart, the things you see on your weather report, is important. But really understanding, especially ahead of the event: how do I prepare myself? How do I make sure my family is in the right place? If I am buying a house, what are the risks to that investment? Those are the kinds of things that will help folks, and those are the things we can get to with things like generative AI and other technologies. They are not really in operations yet, but I think those are the directions people will go. It has a significant impact.

I imagine perhaps a final product goal for your team would be something that would be able to catch, maybe we can talk about it in terms of a hurricane or a tornado, catch something on the radar and then be able to use historical data to understand: here are the areas likely to be hit in a climate event like this. Yes? So I imagine, as you look at so many of the cataclysmic and oftentimes unexpected or maybe underestimated events we have seen, the storms in Texas from a few years ago that felt very out of left field, very unanticipated, a lot of what NOAA would like to do with a more predictive capacity would be to warn people, perhaps in a certain area code: you should probably evacuate, there is a chance you are going to see two inches of rainfall over the next two days, or something like that.

The computing that is out there, and what we can deliver to folks in the field right now: those are the folks you are talking about who say we have to evacuate, or that there are severe thunderstorms. There is a local forecaster responsible for that. Getting them tools, things like AI and cloud and the other technology we talk about, gives us ways to get more capacity for those folks to make better predictions, and then work with the social science folks on how we communicate it the right way. Part of the progression will be how much AI gets into forecasting. That is yet to be seen, and it is one of the things a lot of scientists are looking at: how we move from the deterministic forecasting that we do to more of a statistical, AI-based approach. There is a lot of opportunity, and that is one of the areas a lot of folks are working on.

Perfect. You brought up a point I did want to clarify for a potential audience member, because I know I needed clarification. Would you mind giving a brief overview of that distinction between deterministic and statistical AI?

Deterministic is the traditional approach: if you are going to describe a system, you are doing the physics calculations. You are taking in the observations; it is a simulation. Statistical starts from the data that you have: you run it through your data model, your AI model, however you want to say it. The goal is the same, but they are different approaches to the problem. There is a mix of that in forecasting. The forecast reports that you see and the models we use are still in that simulation world, and dealing with uncertainty is part of it. Moving to the statistical side changes the toolset we have (a toy sketch of the contrast follows below).
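A minimal sketch of the deterministic-versus-statistical contrast just described, using a toy cooling problem; the physics equation, the synthetic "historical" data, and the use of scikit-learn are all illustrative assumptions, not NOAA's models.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

def deterministic_forecast(temp_c: float, ambient_c: float = 10.0,
                           k: float = 0.1, steps: int = 24) -> float:
    """Deterministic route: step a known physical law (Newtonian cooling) forward.
    dT/dt = -k * (T - T_ambient), integrated with 1-hour Euler steps."""
    for _ in range(steps):
        temp_c += -k * (temp_c - ambient_c)
    return temp_c

# Statistical route: learn the 24-hour-ahead mapping from (synthetic) past cases.
rng = np.random.default_rng(0)
past_start = rng.uniform(0.0, 30.0, size=500)
past_result = np.array([deterministic_forecast(t) for t in past_start]) + rng.normal(0.0, 0.3, 500)
emulator = LinearRegression().fit(past_start.reshape(-1, 1), past_result)

t0 = 25.0
print("deterministic:", round(deterministic_forecast(t0), 2))
print("statistical:  ", round(float(emulator.predict([[t0]])[0]), 2))
```

Both routes aim at the same answer, which is the speaker's point; the statistical route simply swaps physics integration for a model fitted to past data.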
Speaking of those toolsets, you talked quite a bit about how important the local weather stations, the people in the field and on the ground, the NOAA affiliates, are to cultivating the data that will hopefully train more predictive AI and automated algorithms. One fear I hear discussed quite a bit in the emerging technology field and industry broadly is who is able to get access to emerging systems. We have seen that already with AI, to some degree. A page in the Biden administration playbook has been to democratize access to a lot of technology, which brings me to my question for you. Does NOAA have any agenda items that involve helping equip more local and potentially underfunded meteorological stations or research areas across the U.S. with more advanced and potentially automated technology? I imagine it would help NOAA in the long run if we are talking about gathering a lot of nationwide and global data.

Let us start generally and work down. In general, that is certainly an initiative. We want to make sure we are doing our work equitably. There are certainly areas, any place, where there will still be centers, but for the local folks who do forecasting, I think we have good coverage. Certainly within the agency, equitable access is important. As we move toward more AI, and I will put this in here as well, it is about having access to that and having the things you need around it: how do I trust this thing? How do I explain or understand how it works? From the technology side, the reason we operate the way we do is that we build a lot of trust with what we deploy, and that has to happen on the AI side. There are specific programs, maybe not specifically for AI, but it is part of it. There is an office whose job is to interact with the public, whether that is academia, industry, or scientists, to work on the next operational suite of models. They are in the process of open sourcing them, really making them accessible. It takes a lot of work to get a model up and running; the idea is that someone could download this thing, run it in the cloud or wherever that may be, and understand how the forecast is being generated and processed. It is to make sure we do not only get the agency view, or the academic view, or the industry view, but everybody's view. From my perspective, it gives you a nice baseline for understanding. There are other programs whose goal is to get data out to the public so you can interact with it: radar data and other operational data sets. You can download them, or go to whatever cloud provider and download the data for free and interact with it the way you want. There are a couple of ways to do that, and they are important because they not only make our products better, they improve understanding.

Touching on the community element, I know we are nearly out of time, and if you cannot really speak to this topic, just ignore me. The Biden administration, and I think the industry as well, is going to be prioritizing trustworthy AI systems in terms of what we adopt and hopefully develop. Can you briefly talk about what NOAA would like to see from potential vendors, or how you plan to incorporate more trustworthy and accountable AI? You do not have too much time, so if you want to give me the cliff notes, go for it.

From a technology perspective, I do not know how far off I would go. What I think we all need to look at is that these things are complex. They are going to be layered on top of each other. You have to have an understanding of not just the system, but the data. There are biases in data; I do not know if we can ever get rid of that.
Do I care about the bias or not? If I care, how will I counteract it or work around it? Understand your data, understand the system. Do I understand the service? Do I understand what it is introducing? It becomes part of the design and development of a product. You have to have a conversation, whether it is with a vendor or a community of interest, and have those kinds of conversations up front to understand how it may potentially impact what you are trying to get to. That is the start of it. As things progress, the more you use something, the less you ask the questions: do I trust this? Of course I do. Have the conversations up front so you are thinking about it throughout the lifecycle.

A couple of very important pillars will be quite present as we keep moving through generative AI: how we are caring for data, how we are monitoring it and looking out for the biases, and developing that alongside the technology as well. It is good to hear that NOAA is on top of that. Normally at this point, when we are dwindling in terms of how much time we have, I like to ask anyone I am interviewing: what are you working on within your agency or office right now that you are excited about and would love to share with the public?

There is a lot of fun stuff going on right now. I think the most exciting thing, and it is not just NOAA, probably other agencies and companies too, is the generative AI part: how we are going to use it, what we can do with it, how it helps. Going back to the trustworthiness piece, how do we get it off the ground? The important things we are focused on are how we get the tools so we can start trying things out and see all the different things that come out of it. That is the fun part for me. We are going down that path. I get to talk about AI a lot, and all the things that folks are putting together, all of the work that we do, those are really fun things, and I am looking forward to seeing what comes of it.

I am sure everyone is, as we enter a priority area with climate change and the environment, a precarious situation. The work is going to be incredibly important in the years to come. Unfortunately, I think that might be all the time we have for a very productive conversation, but I enjoyed talking to you; thank you for taking the time to talk to me. For everyone watching, stay tuned for the next segment to learn more about channeling the power of AI.

Welcome back. I am the senior manager of event operations, and it is my pleasure to introduce an NVIDIA data scientist for the second part of today's conversation. Thank you for joining us today.

Thank you for having me.

To kick things off, can you start by telling us a little bit about your role?

I would be happy to. I am a data scientist. I have been with NVIDIA for a couple of years; I have been a fan since the '90s and followed their journey early on. I have worked in the federal space for 11.5 years, so I have a lot of expertise in that domain as well, and I know how hard it is for federal government folks to work with AI. On top of that, I am a PhD student at the University of Maryland, Baltimore County in computer science. I used to teach as an adjunct, but not right now while I am working on my PhD; I have to sleep sometimes.

Thank you for the excellent introduction. What are experts anticipating in AI?

NVIDIA is a key player in the AI industry because we have worked hard to enable hardware and software to accelerate computing.
Our founder recognized the importance of accelerated computing and the application of the GPU to accelerate it. He recognized the potential and went all in on developing the hardware and software to enable the ecosystem. The target audience is mostly developers. We look at every aspect of the AI pipeline, not just hardware. We do a lot with hardware you can purchase for machine learning models, for large language models, but we also listen to the industry, to partners and customers, and to the internal development teams. Recognizing that every piece of AI development is complicated, anywhere we can accelerate it, we do: not just the hardware piece that provides specialized support for matrix operations, which is the foundation of a lot of the models used in generative AI, but the whole software ecosystem. Whether you are talking generative AI or computer vision, we have software pipelines that enable every piece of that.

Right now, generative AI is having a big moment; our founder has called it the iPhone moment of AI. The trend we are looking at is enabling humans to write computer programs by leveraging generative AI. I almost look at it as templates on steroids. You could open up a resume document, like the format, and modify it to your own taste; now you can speak to a prompt, say "write me a resume," and it will do a pretty good job of creating the structure for that. Those are some of the key factors in these technologies. AI is a team sport: everything builds on the previous work that has been done in the past. Everything you see now with generative AI has been built on foundations of research, even from the 1970s and '80s. The first paper dealing with the ideas behind large language models, I think it was around 1980, was a paper on recurrent neural networks, and further neural network papers followed. These were great models that looked at persistent state across sequences; they were a necessary step, but they could not scale very well. When transformer models were invented in 2017, that is what enabled the large language models to scale out, and GPUs were a key component that enabled the process; without that, you never would have been able to scale out the way you can today. Sorry for getting a little bit too technical.

To take it back up a level, in terms of how you apply AI and how it will enable industry today, it will help everyone get their work done, as long as you have the appropriate structure in place. Folks who are learning how to program can go and use the copilots or large language model code generators that can help write code or proof it for you. It is important that industry realizes you can put structures in place to make sure sources are cited. It really is being seen as a productivity enhancement tool. I see AI being infused every day, aside from generative AI, in helping large organizations be more efficient. There is a lot of work being done in areas of document processing. I worked on a pilot to show how you could extract handwriting from forms and turn it into data that can be integrated into systems (a simplified sketch of that kind of pipeline follows below). In the federal government, there is still a lot of paper. For folks who work with disability claims, there are lots of medical evidence forms that are processed by hand and saved as PDF files, and that is it; then somebody hand-evaluates them. The IRS is working on a project to improve the digitization of paper forms.
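To give a flavor of the form-digitization pilot described above, here is a minimal sketch using the open-source Tesseract engine through pytesseract. Real handwriting recognition on claims forms would need a purpose-built model and far more validation; the file name and the "Field: value" layout here are hypothetical.

```python
# Requires: pip install pytesseract pillow, plus the Tesseract OCR engine installed locally.
from PIL import Image
import pytesseract

def extract_form_text(image_path: str) -> str:
    """OCR a scanned form page into raw text (printed text works best;
    handwriting generally needs a specialized recognition model)."""
    return pytesseract.image_to_string(Image.open(image_path))

def to_record(raw_text: str) -> dict:
    """Naively parse 'Field: value' lines into a structured record that a
    downstream claims or case-management system could ingest."""
    record = {}
    for line in raw_text.splitlines():
        key, sep, value = line.partition(":")
        if sep and value.strip():
            record[key.strip().lower()] = value.strip()
    return record

if __name__ == "__main__":
    print(to_record(extract_form_text("scanned_claim_form.png")))  # hypothetical input file
```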
There are still 22 million paper forms hand-keyed in every year, with an estimated 22% error rate; that is significant. There is a lot AI can do for operational work like that.

What trends are you seeing in AI in the public sector space?

The public sector is more risk averse, which is understandable. Their main focus is on their applications; they are not profit driven, they are focused on the customer service side, the experience. AI spend in that space has increased. I am not going to cite the percentage, because although I saw it recently it is not in my brain right now, but I know spending has increased by at least $7.7 billion over the last several years. We are working with quite a few agencies that are asking questions and engaging with our research. They are working closely with a lot of partners, asking questions, and doing their own legwork. They are sending team members to training, they are starting to hire data scientists and AI experts, and they are starting to look, with caution, at how they can implement AI into their processes and things like fraud detection. Then there is work that leverages generative AI for things like chatbots: triaging incoming helpdesk calls, which will help them automate responses the next time a company calls with a question. There is also a lot of opportunity to automate policy systems. In the government, say you work for Social Security: you have worked there for 34 years and you have the domain expertise, so you know where all of the little regulations are. If someone comes in with specific questions about whether they are qualified for certain disability benefits, you would normally have to know what manual and what chapter to look it up in, and then cite that when processing the application. Imagine now that they can take all of that and digitize it into a large language model type of application, so someone could go in, ask questions, and automatically get a source provided (a simplified sketch of that idea follows below). Then the person who comes in to replace them can have more confidence in their decisions, look up the right information, and make fewer mistakes.

We are seeing that trustworthy AI systems are a huge theme in government policy. With the Biden administration prioritizing missions related to responsible AI creation and deployment, how is NVIDIA responding to the growing demand?

Trustworthy AI is very important to NVIDIA. We publish quite a few large language models and deep learning models on our repository, and when you look at any of those models, we publish model cards. They are like data sheets that provide specific information on the type of architecture, the algorithms, where the data comes from, and what it is trained on. Then we provide additional information on how you can further customize the model. We are clear about the use cases and some of the weaknesses, and about how you can customize it and adapt it to your own data. We are also very clear about specific use cases: we do not advocate for things like facial tracking, and we do not advocate the use of models for things like providing personally identifiable information.

Just to dive into this further, how do you think the larger industry will respond?

Right now, they are asking a lot of questions. There are some guidelines that have been posted. From the government perspective, they will have to make sure that whatever AI technologies they invest in, they can answer those questions.
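As a rough illustration of the "look it up and cite the source" application described above, here is a minimal sketch that retrieves the most relevant manual passage with TF-IDF similarity and returns it with its citation, the kind of design that keeps answers traceable. A production system would add embeddings, a vector database, and a language model on top; the manual excerpts and citations below are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Invented policy passages paired with the citation a reviewer would need.
PASSAGES = [
    ("Disability Manual, Ch. 3", "An impairment generally must be expected to last "
                                 "at least 12 months to qualify for benefits."),
    ("Claims Manual, Ch. 1", "Applications may be filed online, by phone, or in person."),
]

_vectorizer = TfidfVectorizer().fit([text for _, text in PASSAGES])
_passage_vecs = _vectorizer.transform([text for _, text in PASSAGES])

def answer_with_source(question: str) -> str:
    """Return the best-matching passage plus the citation it came from, so the
    person relying on the answer can trace where it originated."""
    scores = cosine_similarity(_vectorizer.transform([question]), _passage_vecs)[0]
    citation, text = PASSAGES[int(scores.argmax())]
    return f"{text} [source: {citation}]"

print(answer_with_source("How long must an impairment last to qualify for benefits?"))
```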
If an agency's key decision maker, their commissioner or secretary, needs to get in front of Congress, they have to answer tough questions. How do you know the decision made to approve this claim, or deny it, or put this person on the audit list, is accurate? Why was this person chosen? They will need to be able to trace those decisions. Vendors that sell to the government will need to make sure they can also provide those answers and can talk about where the models come from. When you go to a lot of vendors' websites, they are capitalizing on OpenAI and generative technology; they may or may not give you detailed information about where they get the models from and how decisions are made. But they must be prepared, and a lot of them are preparing, to answer these questions. When they respond to requests for information from the federal arena, they will put in that detail: how they got to this point and the offerings along the way. It is still a work in progress, and it may get more complicated in the future as models get trained on more and more data.

There are other techniques that we leverage. NVIDIA has something called guardrails. For generative AI, we tend to use certain large language models and then take the source documents we need to search, focus the information around that topic, and provide the structure that we need, so that when you are answering questions you are getting targeted responses and ignoring the noise. For instance, if you have a set of documents that pertains to mortgages or home loans, and somebody comes along and asks a question about traveling to a city, or something not so polite, the guardrails in place will step up and say, I cannot answer that. It keeps the focus on the topic. It is a multi-model type of scenario: you have your initial large language model and then additional models to identify the context. There is more to it than that; there are fancy databases under the covers. Essentially what you are doing is treating the language as numbers and then doing mathematical operations under the covers, and it is all of that fitting together that enables guardrails. They can even help remove some of the bias. If the model was not trained on something you need it to do, then besides adding an additional repository of data to search against, you can do things like retraining the model with domain expertise and keeping the content constrained. There are a lot of different techniques, and there is a lot of work and attention being given to that area. It is taken very seriously.

That is all the time we have today. A big thank you to our expert speakers for participating in today's program, and to NVIDIA for underwriting today's conversation. If you missed part one, keep an eye out for the email. We will send out a