Researchers from MIT and elsewhere have found that machine-learning models trained to mimic human decision-making often suggest harsher judgements than humans would. They found that the way data were gathered and labeled impacts how accurately a model can be trained to judge whether a rule has been violated.
Sharing data can often enable compelling applications and analytics. However, more often than not, valuable datasets contain information of a sensitive nature, and thus sharing them can endanger the privacy of users and organizations.
A possible alternative gaining momentum in the research community