for white-collar work. Do you think there actually is a worry that we are giving more control over to automated systems because they are just faster and better than us, and then every so often something goes wrong, a bit like the financial crisis?

Sure, I think that's a legitimate concern. It's really important to recognise that these systems are fallible. They are not purveyors of the truth, they are not omnipotent, they are not gods, right? They are only as good as their training data, and as it turns out, as you can already see with ChatGPT, they have this propensity to lie, to hallucinate, to make stuff up. They are not infallible. By ascribing undue sentience or capabilities to these machines, I think we are actually stripping away our own agency, because these systems are still very much under the control of organisations and people. And over the coming months and years, we have a chance to think clearly and strategically about how they are going to be