extinction of humans. many top experts have signed a statement warning of the risks of artificial intelligence, and this is what that wording says. the g7, the eu and the us have all been holding meetings trying to work out how to tackle the challenges. i have been speaking to stephanie, a technology ethics researcher, about the current risks posed by ai. they are talking, but what are they doing to stop these risks from manifesting? they are still building this technology. they are not saying they are going to stop building it, they are building it. they are still seeking investment, and this investment is to the tune of multiple billions of dollars. so that's not really a mitigation strategy, is it? without wishing to disrespect people on the list, who are serious people and whom i've listened to, there are a lot of people who are not on that list who are also very serious thinkers and are warning of very different risks, not the sort of science fiction.
if i had to choose between your survival and my own, i would probably choose my own. experts including the heads of openai and google deepmind express concern that ai will lead to global extinction, as development of the superintelligent software continues. should we be concerned? we'll have all the analysis on this and more as i'm joined for the next hour by ayesha hazarika, journalist and former labour adviser.