By John Zerilli with John Danaher, James Maclaurin, Colin Gavaghan, Alistair Knott, Joy Liddicoat and Merel Noorman. Used with permission of the publisher, MIT Press.

Human bias is a mix of hardwired and learned biases, some of which are sensible (such as “you should wash your hands before eating”), and others of which are plainly false (such as “atheists have no morals”). Artificial intelligence likewise suffers from both built-in and learned biases, but the mechanisms that produce AI’s built-in biases are different from the evolutionary ones that produce the psychological heuristics and biases of human reasoners. One group of mechanisms stems from decisions about how practical problems are to be solved in AI. These decisions often incorporate programmers’ sometimes-biased expectations about how the world works.

Imagine you’ve been tasked with designing a machine learning system for landlords who want to find good tenants. Predicting who will make a good tenant is a perfectly sensible question to ask, but where should you go looking for the data that will answer it? Which variables should you use in training your system: age, income, sex, current postcode, high school attended, solvency, character, alcohol consumption? Leaving aside variables that are often misreported (like alcohol consumption) or legally prohibited as discriminatory grounds of reasoning (like sex or age), the choices you make are likely to depend at least to some degree on your own beliefs about which things influence the behavior of tenants. Such beliefs will produce bias in the algorithm’s output, particularly if developers omit variables that are actually predictive of being a good tenant, and so harm individuals who would in fact make good tenants but won’t be identified as such.
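To make that point concrete, here is a minimal sketch in Python of how a developer’s choice of variables gets baked into such a tenant-screening model. The file name, column names, and the “good_tenant” label are all invented for illustration, and the model is a generic logistic regression rather than anything a real screening product is known to use; the point is simply that whatever is left out of the chosen feature list (say, rental payment history) is invisible to the model, however predictive it might be.

```python
# Hypothetical sketch: a developer's feature choices shape a tenant-screening model.
# All data, file names, and column names here are invented for illustration.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Assume a historical dataset of past tenancies, labeled by whether
# the tenancy worked out well (1 = good tenant, 0 = not).
df = pd.read_csv("past_tenancies.csv")

# The developer decides which variables the model may learn from.
# Anything omitted here -- e.g. a "rental_payment_history" column that is
# genuinely predictive -- simply cannot influence the model's predictions,
# so applicants whose reliability shows up only in that variable lose out.
chosen_features = ["income", "current_postcode", "solvency_score"]

X = pd.get_dummies(df[chosen_features])  # one-hot encode categorical columns
y = df["good_tenant"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("held-out accuracy:", accuracy_score(y_test, model.predict(X_test)))
```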