This week, we're taking a deep dive into artificial intelligence and how it's transforming the world around us. Yeah, and that includes healthcare, where we meet the AI helping radiologists to diagnose cancer. "You can see these little white dots. The AI is highly suspicious." And in the fast-moving game of AI artwork, who owns what? And can artists protect their work?

For some time, artificial intelligence has been all around us. You might not have noticed it, but your video streaming services, social media feeds, the maps on your smartphones, they've all been steadily improving their performance because the computers behind them have been learning. And then last year, something important happened. Yeah. AI got human, or at least it felt like it did. Companies like Google and OpenAI started showing off stunning photorealistic images like these, all created by AI from short text descriptions. And then AI started having conversations with us. They were starting to generate st
And somebody who grapples with this a lot of the time is Nina Schick, who has written books on deepfakes. Nina, we can't even possibly begin to know what's real or not now. How on earth do we deal with this?

That is an existential question for society, because I think this is the last moment, if you will, in the internet's history where the majority of the data, information and content we see online is not generated or created by artificial intelligence. We are seeing this new field of artificial intelligence, so-called generative AI, that can create content and information in every single digital format. And the use cases of generative AI are so profound that increasingly we will start to be engaging with AI-made content; it is going to become ubiquitous.

That seems like a pretty unsolvable issue.

You have to take a cybersecurity approach, because there is no silver bullet that will fix it, but you have to start building layers of resilience around society to navigate this new era of AI.

I mean, you can imagine a world where people are fooled by AI-generated images, but I can also imagine a world where, if something is true, people just won't believe it. And so someone who that image affects, maybe a politician or a leader, can just say, "Well, that is fake news," even though it's genuine, because there is so much doubt cast throughout society.

You've hit the nail on the head. That is a phenomenon known as the liar's dividend. Because it is not only that every piece of content or text can now be generated with AI, so you can synthesise or fake anything; it is also the understanding that everything can be created by AI that undermines the integrity of everything that is authentic.

But should there be one sort of international way that things are done,