Seemingly trivial differences in training data can skew the judgments of AI programs—and that’s not the only problem with automated decision-making
A new study from researchers at the University of Toronto and the Massachusetts Institute of Technology (MIT), published this month, is challenging conventional wisdom on human-computer interaction and on reducing bias in AI.
Artificial intelligence systems judge people who violate hypothetical rules or policies more harshly than humans making the same judgment calls, the study shows. Because AI fails to match human judgment and is more prone to issue harsh penalties and punishments for rule breakers, the finding could have real-world implications if AI systems are used, for example, to predict the likelihood of a criminal reoffending, which could lead to longer sentences.