Most of us have an innate respect for code. Algorithms have proven more efficient than people at certain tasks, and we’ve come to revere the machine for its apparent accuracy and neutrality.
The trouble is that when we use an algorithm to make a prediction, we are rarely aware of the calculations it performed to reach its conclusion or the biases built into it. By putting our faith in algorithms anyway, we enable those biases to be systematically reinforced.
Data scientist Cathy O’Neil defines an algorithm as “using historical information to make a prediction about the future.” Machine learning works by building on the data it’s given, so if an algorithm is fed a skewed data set, it produces a self-perpetuating pattern of bias, as the sketch below illustrates.
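To make that feedback loop concrete, here is a minimal Python sketch. Everything in it is invented for illustration: the “hiring” records, the group labels, the rates, and the decision threshold are assumptions, not data from any real system. A toy model predicts exactly as O’Neil describes, from historical outcomes alone, and its own decisions are fed back into the history it learns from.

```python
import random

random.seed(0)

# Hypothetical historical records: (group, was_hired).
# Group A was hired ~70% of the time, group B only ~30% -- a skewed history.
history = ([("A", random.random() < 0.7) for _ in range(500)]
           + [("B", random.random() < 0.3) for _ in range(500)])

def hire_rate(records, group):
    """Fraction of past applicants from `group` who were hired."""
    outcomes = [hired for g, hired in records if g == group]
    return sum(outcomes) / len(outcomes)

def predict(records, group):
    """'Use historical information to make a prediction': recommend hiring
    only if the group's historical hire rate clears a fixed threshold."""
    return hire_rate(records, group) > 0.5

# Feedback loop: each round, the model's own predictions are appended
# to the history it will learn from in the next round.
for round_no in range(1, 4):
    for group in ("A", "B"):
        decision = predict(history, group)
        # 100 new applicants per group, all decided by the model.
        history += [(group, decision)] * 100
    print(f"round {round_no}: "
          f"A hire rate={hire_rate(history, 'A'):.2f}, "
          f"B hire rate={hire_rate(history, 'B'):.2f}")
```

Run it and the gap widens every round: the model hires all of group A and none of group B, and each rejection becomes fresh “historical evidence” against B. No one programmed the model to discriminate; the skew in the input data did all the work.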