Researchers from MIT and elsewhere have found that machine-learning models trained to mimic human decision-making often suggest harsher judgments than humans would. The way the data were gathered and labeled, they found, affects how accurately a model can be trained to judge whether a rule has been violated.