Neil Lawrence, welcome to HARDtalk.

Thank you very much for having me.

It's a great pleasure to have you. Now, you are a computer scientist, but it seems to me, as you've journeyed deep into the potentiality of artificial intelligence, you've also thought a great deal about human intelligence and what is so very special and unique about us humans. Can you try to put that into words?

Yeah, I think, for me, what we've seen with a lot of the artificial intelligence debate has been a naturally narcissistic tendency to think about our intelligence. And what I think it does is offer the opportunity to introspect about our intelligence, to stand in a different place, to look at a different type of information processing, that done by a computer, and use our understanding of that, which we built and created, so we understand it, to look back and think about what's special about us.

Is intelligence the right word to use when it comes to discussing what machines can do in this digital, data-driven era?

It's a really good question, and my mind changes a little bit about it either way. I think the problem with the word is that we tend to think of intelligence as something very particular to us. And the nature of machine intelligence is nothing to do with us. So, if we want to use the word intelligence for machines, then I think there's a question about, well, should we use the word intelligence for our ecology, which is also an information-processing ecosystem? And we don't tend to do that unless we're engaged in animistic practices. So I think that's a really important question that goes to the heart of the debate, because I think people are confused by its use when we're talking about machines.

You have caused quite a stir with this book, The Atomic Human.
It's subtitled Understanding Ourselves in the Age of AI, and what I find so interesting about it, and we're going to get deep into AI in a minute, but just to stick with human beings for a second, is that you talk about, quote, the immense cognitive power of the human brain, and yet you also point out that the human brain is utterly inadequate to compete with the computer when it comes to conveying and processing information. Try and explain to me how you can both describe us as having this immense cognitive ability and yet, in some ways, be quite useless.

Yeah, it's to do with the physics. So, our brain uses neurones, which use electricity to compute. And it's an extraordinary, extraordinary entity. I mean, I don't want to call it a machine. It's beyond our best understanding at the moment. And it uses electricity. Now, that comes from our evolution as animals, which first used electricity to move, as multicellular organisms in which each cell, which probably on its own is thinking, I'm a little cell... well, it's not thinking... they become coordinated through these electrical signals. Now, when I communicate with another human, I have to use sound waves. Now, the speed of sound is, roughly speaking, a million times slower than the speed of light. And that means that the rate of communication I can have with another human is roughly a million times slower. And you can break it down into bits of information, and quantify, in any given minute of human speech, how many bits of information we can convey to one another. And you compare that with how a computer talking to another computer can convey immeasurably more bits of information in that same minute.

So, what roughly is the ratio?

Well, it depends on whether you're talking about ten-gigabit or one-gigabit internet, but basically 300 million times faster.

Right!
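The arithmetic behind that ratio can be sketched in a few lines. This is a back-of-the-envelope check, not a figure from the interview itself: it assumes human speech conveys on the order of 2,000 bits of information per minute, an estimate chosen because it is consistent with the "300 million times faster" ratio quoted for a ten-gigabit link.

```python
# Rough comparison of human vs machine communication rates.
# Assumption (not stated in the interview): human speech carries
# roughly 2,000 bits of information per minute.
HUMAN_BITS_PER_MINUTE = 2_000

# A ten-gigabit link moves 10e9 bits every second.
MACHINE_BITS_PER_SECOND = 10e9
machine_bits_per_minute = MACHINE_BITS_PER_SECOND * 60  # 6e11 bits/minute

ratio = machine_bits_per_minute / HUMAN_BITS_PER_MINUTE
print(f"Machine-to-human bandwidth ratio: {ratio:,.0f}x")  # 300,000,000x
```

Swapping in a one-gigabit link drops the ratio tenfold, to roughly 30 million, which matches the hedge in the answer above.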
Which is why, when it comes to our discussion today, which is going to be about big data and algorithms and how one can use immense, unimaginably large data sets to perform all sorts of tasks through algorithms, the human being can't compete.

Well, it's interesting. I wouldn't say we can't compete, but we can be easily manipulated by something that we wouldn't think of as intelligent. There are certainly tasks that we do a lot better than the machine, but the machine is doing many things in a very different way. So, even when we think, oh, it's doing something relatively simple, like just looking at our data and choosing whether or not we like certain posts, classic machine learning algorithms, it can manipulate us because it sees us in a very different way to the way we see each other.

Again, philosophically speaking, much of the way in which human beings convey meaning to each other isn't based on words at all. It's based on many different forms of expression. You could think of art and the way we convey meaning through paint or music. One could think of the facial expressions that, through socialisation, we've learned to invest with great meaning. Are those powers that are really important, do you think, to the way we shape the next century, living alongside these machine-learning devices?

Well, I want to be careful about "living alongside", because I don't quite know what we mean by that. They should be tools that are operating under our guidance, but even that is a difficult question to answer. What does that look like? Because they are making decisions that are often beyond our conception. And those decisions could go in ways that are not the decisions we would make if we brought our context to them. And I think the point you made is that there's so much to us that is different from the machine, in terms of, like, we're here physically.
I mean, how much different is it for you, even if you're not doing this on Zoom, to be together in the studio?

You're right. Fundamentally different.

And so, how much different is it when the thing you're discussing with doesn't even have a bodily form? I'm very aware that you are different from some other leaders in this tech debate, because you do not take what I might call a doomsday approach to the future. There are many eminent thinkers, like Geoff Hinton, who I imagine you probably know, who used to work at Google. He quit. He's a professor of computer science. He believes that AI represents an existential threat to humanity. Elon Musk has said that he believes that AGI, as he puts it, that is, artificial general intelligence, will supersede the intelligence of all human beings within, he says, a year. You don't seem to share these views and these fears.

Well, that would require a rankable definition of intelligence, and we don't have that. Intelligence is a multifaceted thing. We can't simply rank it in that way. In fact, the idea that we can rank intelligence is fundamentally eugenic. It comes from the eugenicists, and the term "general intelligence" comes from eugenics. So I think we have to be very careful with that, because it's the same form of thinking that the eugenicists participated in when they said, oh, we could breed more intelligent people by measuring the intelligence of people. Intelligence is nothing like that. So, the idea of an artificial general intelligence would be like asking, what's the artificial general vehicle? What's the best vehicle for all tasks, the one that does everything better than all other vehicles? This is a ridiculous notion. The vehicle you would like to use is dependent on the circumstance, and with intelligence, unless we define the context, the right form of intelligence can't be defined.
So the word is more like beauty, in that we all have a sense of what it is, but actually we can't say much more about it other than our sense. We can't define it from first principles.

What matters most to you in the here and now of the development and deployment of artificial intelligence?

I think we're in a very difficult situation, where people are being told that this technology is too complex for them to understand, that they shouldn't be worrying about it, that only certain people, only certain companies, should be controlling this technology, and I find that horrific.

You mean you are seeing the rise of monopolistic tech oligarchs who are, in a sense, telling the public, don't worry about it, we'll offer you services that you won't understand, but believe us, they're in your best interest?

And they're gaining mindshare with governments. Of course, they're only doing it in their... that's their objective: to, as I think Satya Nadella said, capture the AI market. But we know the history of that. This is one of the most important changes that has occurred, certainly in the 500 years since the printing press, but we can go back 5,500 years, to the development of writing. We are talking about a fundamental change in the way we share information with each other, and we're talking about a small group of people who are saying, you regular people, you don't understand this, let us look after it for you, and there is no track record of those entities doing this well. If we look at the history, these companies do not have a good understanding of how society works. They do not have a good understanding of the things that are most important to people, such as health, social care, education, security.

I'm struggling to square what you've just said with the fact that, for three years, very recently, you went off to work for Amazon, one of the biggest AI and big data corporations in the world.

Why do you struggle to square that?
Well, you've just described to me your deep mistrust of the agenda of... you didn't name Amazon, but when you talked about the technocracy, I'm imagining Amazon would be up there as one of the key members of the technocracy.

Well, I think increasingly so, because they're increasingly moving away from their core business, which is selling directly to customers and putting customers first, where those customers are directly aligned with the purchase of a product, and they're getting more into the business of selling to other businesses, or advertising, and as these businesses...

If I may interrupt, I mean, they are one of the greatest owners of data on individual consumers in the whole world today. Again, I'm just struggling to see how you can be so worried and fearful of these biggest global tech companies, and yet go off and work for one.

Because you have to understand these systems if you're going to understand how to intervene. One of the areas I worked in predominantly was Amazon's supply chain and the possibilities for benefit through better organisation of the supply chain, particularly when you start realising, well, the supply chain of goods is similar to the supply chain of availability of staff in hospitals. So, the same sets of algorithms that allow you to efficiently move goods around the world could also allow you to more efficiently deploy nurses and doctors. So, if you want to understand the problems that come up, you have to work with these companies. I've also worked with oil companies, you know, and understood what's going on in terms of how we extract oil. But I'm also worried about the climate. I think it's not a healthy world where we totally disassociate ourselves from these companies. There are very many well-meaning people in these businesses.
Well, do you think politicians, who, in our system, ultimately make the rules that the big tech companies have to abide by, do you think politicians have been naive in the way they've approached this challenge of how best to control, regulate, and make accountable big tech?

I think being a politician is an extremely difficult job, and I think a lot of it... we're so lucky in the UK that we have really good advice structures to support the politicians. But, of course, this isn't just a UK issue.

It isn't. And I'm just, again, looking at something you said, a sort of siren statement of yours: society is indulgent of companies, that is, the big tech companies that are using AI, in the way that parents are indulgent of children. We allow companies to do things that we would never let other public institutions get away with. You know, as you look around the world, do you see that still being a profound problem?

That's a deep problem, isn't it? I mean, it's extraordinary. When companies do the most basic of things, like, oh, we've opened a research centre in Nairobi, they expect us all to cheer them as if they've suddenly become the most saintly thing, like a child, and it's because we somehow have a moral exception around companies. We don't expect them to behave as we would expect a civil institution to behave.

But we also want them to generate economic wealth. I mean, you know, if you imagine a politician in the United States or the UK, they don't want to impose a set of rules on a company at the cutting edge of AI, which that company might well argue would dampen down their desire to innovate, to be entrepreneurial, to push the boundaries of the technology, you know, because then they would say, we might go and work somewhere else.

Absolutely. But there's a sort of confusion about this space. This is one of these subtleties. So, the UK's policy is actually pro-innovation, and one I greatly support.
But pro-innovation isn't the same as pro very large companies that dominate the ecosystem and prevent small players entering. The way market economics works is that if I'm a doctor or a teacher and I have a good idea about how to use AI to benefit patients or my colleagues, I should be able to enter the market with that idea, and that's not the situation. We've got a situation where digital markets are dominated by a few very large players, and government has already recognised this. So, one of the extraordinary things that happened just before the election: on the 24th of May, the Digital Markets, Competition and Consumers Act got passed, almost unreported in the wider press. This is one of the most important acts, because it's trying to address that problem, which is a problem of how market economics works in the digitised era, with this information passing...

As you point out, we do have laws and regulations. The EU has begun implementation of its AI Act as well. We see that, but you go beyond that. You've come up with this idea of data trusts: that people's data, which, in a sense, is one of the most important things they own in this digital world, should be handled in a way of which they are fully aware. There is transparency. They know what happens to their data. There's accountability for their data. Is this world of data trusts ever going to happen?

I hope so. It's a difficult thing to introduce, because it involves educating people about how their data is being used.

And it undermines the commercial principles on which so many big tech companies are making gazillions of dollars' worth of profit.

It undermines data monopolies. I mean, I think the analogy I think of most often is that it's a feudal system we have with data right now. So, in fact, in data protection law, we are data subjects, and the data controller is kind of like a feudal lord who controls that data on our behalf.
But there's an asymmetry of power and knowledge that means that we don't understand when they're making errors, and we struggle to hold them to account. So, the notion of a data trust is that we democratise that, by having trained people, professionals, who look after that data on our behalf, do that negotiation with those companies, and have what we call undivided loyalty to the data subjects.

As we deepen and develop our AI capabilities, do you actually want to see the huge global tech corporates, the dominant players in this field, certainly in the western world, do you want to see them broken up?

I think that's not something where I understand whether it's the right action. I think what is much more interesting is the recent legislation that's come in, which has this notion of strategic market status. So, strategic market status, and I'm not a deep expert on it, is a situation where, if a company is dominating in one digital market sector, they become subject to particular regulatory provisions. Now, it might not be the right solution to break them up, and who knows? Because, actually, what we have to do is support our regulators in getting an understanding of what the right interventions are, an understanding of how these things are affecting the markets, which can reflect the speed at which these companies are deploying new technologies.

Most of this conversation has essentially been in the context of the western world and a democratic form of governance. Now, Sam Altman, CEO of OpenAI, posits a sort of binary conflict going forward, which is, as he puts it, between democratic AI and authoritarian AI, and he worries that, from Vladimir Putin in Russia to Xi Jinping in China, there are authoritarian leaders who are piling state power and resources into AI development, which may well outpace and overwhelm the private sector in the western world. Do you have those same fears or not?
It's a really important question, but the answer that's being given, that we fight autocracy of the state with autocracy of companies, is not a very good one. So, the idea that the way we fight fire is with fire of our own, that we take decisions that should be given to the people, shared, discussed, with co-created solutions, and instead give them to large companies to decide, is a very depressing prospect, because if you give me the choice between states that are run by corporations and states that are run autocratically by governments, I'm not sure which I prefer.

Isn't there also a complacency to the argument? Because it sort of suggests that democracies, and state power in democracies, will handle AI with great responsibility, when oftentimes we can't be sure of that at all. I mean, I'm just thinking of one particular example. It's just come out of Argentina, where the new government of President Javier Milei has created what he calls an AI security unit, which the Argentinian state says will use machine learning, and this is the quote, "to analyse crime data and predict future crime". Now, you can imagine all sorts of ways in which that might be abused.

Yeah, I mean, it's a... it's a deep worry. And of course, the data protection legislation that exists today was not initially inspired by what companies might do with data, but by people's fears of what states might do with data, coming out of what was going on in East Germany with the Stasi. So, these laws are there to protect us not just from large companies, but from our own governments. But the situation we have at the moment is that our government, in trying to deliver services in the most difficult areas of society, the so-called wicked problems: health, education, social care, these challenging problems, is somehow stymied in its deployment of these technologies, because it can't bring data together in the same ways that these companies have.
Indeed, in the pandemic, we saw situations where these companies had data about the movement of people within the wider population, which our governments had to buy back to understand the way in which the lockdowns were affecting the progress of the pandemic, and affecting us economically.

Perhaps counter-intuitively to some, you seem to be saying that some of the clearest benefits that can be gleaned from artificial intelligence at the moment lie in those countries which, in many ways, are the least economically developed, with the least developed infrastructure, and you look particularly at Africa. Why do you believe that in Africa there is so much potential for AI to make a difference now?

Because when we go to Africa, and I work with colleagues on the ground, you can engage people instead of corporations. So we're working directly with farmers, directly with health centres, building systems that go all the way from the farmer's field to the ministry of agriculture, looking at what people's problems are and trying to tailor solutions for them. Now, I'm not saying that all AI work in Africa is of that nature. But you have a capable population of people empowered by a very fast mobile phone network, and the availability of smartphones for around $60, which is about the price of a bicycle. So, out of the reach of some people, but within reach of many people. And this is transformational when you think about those wicked problems of health, social care and education, on which Africa desperately needs support.

I said at the beginning that, despite all of our excitement about the massive and rapid development of AI, what matters most isn't the technology; it is still the humans and what sits here, in between our ears. If you think about that, with all of your thinking about our human intelligence, does that make you optimistic about the way we are going to use AI?
I am optimistic, because, despite all these problems, despite all the people who are trying to make a buck, or getting in the way, or who don't understand the systems, in this country in particular there are many good people who have spent decades not trying to earn fortunes at Google, but working hard to get the right understanding in place, and that understanding is there within our government. And, over time, people will notice it.

But who has the power: them, or the people at Google?

Well, that's the difficulty, isn't it? Government likes to celebrate its relationship with big tech. They see it as a way of demonstrating that they're up with the cutting edge. But the people they have around the table, I know full well, are totally disempowered in their own companies. These are the public policy leads from the UK, people who don't even get a look-in in the boardrooms in Silicon Valley, where the real decisions have been made, and I've been in those rooms and I've seen how those decisions are made. These are good companies, in many respects, and they can bring many benefits, but we must not lose control to them of what we would like for our future. And, as the UK, we have the possibility to lead the world in what that future looks like, in a way that is not just about techno-solutionism in Silicon Valley, but is about a better society for different places all across the world.

Neil Lawrence, it's been fascinating. Thank you for joining me on HARDtalk.

Thank you, Stephen. Thank you very much.

Hello. The weather's looking fairly promising for most of us on Wednesday, with some prolonged spells of sunshine, but it won't be dry everywhere. In fact, far from it. We are expecting a few showers, and in the morning, across one or two areas, it actually could be pretty wet. Let me show you the big picture. Here's the forecast for Wednesday.
A couple of weather fronts bringing showers into Northern Ireland and western Scotland, and this one here also in the north of England, Wales, and stretching to southwestern areas. But whether you've got the sunshine or the rain in the morning, it's actually going to be quite warm first thing: 17 in London, about 15 around Merseyside, and similar values for Glasgow and Edinburgh. So showers reaching Northern Ireland and western Scotland, and here's that weather front stretching from northern England through the Lakes, into Lancashire, Wales, maybe the West Midlands; there could be some spits and spots further south, too. So, for a time, some of us will catch some rain here, showers there moving into Scotland, and big temperature contrasts. On Wednesday, eastern England, East Anglia and the south east, mid-to-high 20s. Out towards the west, it's a lot fresher: temperatures of around 18 in Belfast and for Glasgow. Now, this is fresher air that's arriving off the Atlantic. In fact, the fresher air will spread right across the country during the course of Thursday. So no longer is it going to be so warm in the east and the south east. Temperatures here will be closer to, say, around 22 or 23 Celsius, out in the west around 18 degrees, with that Atlantic breeze coming in. Just a few showers for Scotland. Here's a look at the end of the week. High pressure is building off the Atlantic, this Azores high. Showers are kept at bay across France, far away. So it's a day of light winds and sunny spells. Really a very pleasant day, and for some of us, a perfect summer's day. Neither too hot nor particularly cool, and the temperatures will be around the high teens across western areas, maybe nudging up to 24 in London for the end of the week. So that's Friday. How about the weekend? Well, the high pressure is still with us. Weather fronts are trying to push in, but it's a substantial high, so it's keeping things dry.
Maybe the weather is going downhill a little bit, with a few showers towards the south and towards the west, as we head into next week. So here's the outlook, then. Friday and the weekend are looking pretty decent for many of us, with spells of sunshine, looking quite warm, too, and then into September it does look as though we could catch one or two showers. That's it for me. Bye-bye.

Live from London, this is BBC News. Israeli security forces say they've carried out an operation in the north and west of the occupied West Bank. Reports say seven Palestinians are dead. Donald Trump has accused the US Department of Justice of trying to resurrect a "dead witch hunt", after it filed revised election interference charges against him. Sir Keir Starmer is visiting Berlin to discuss a new UK-Germany pact. SpaceX's Polaris Dawn, a private space mission that aims to complete the first-ever civilian spacewalk, has been delayed again, for the third time in a row.

Hello, and welcome to BBC News. I'm Lukwesa Burak. But we start with breaking news from the occupied West Bank. Israeli security forces say they've carried out an operation in the north and west of the occupied West Bank. The Palestinian Red Crescent says at least seven people have been killed, five of them in drone strikes near Jenin and in the Jordan Valley. There was also military activity further south, around Tulkarm, involving the use of bulldozers, but details are scarce.