Last week, Microsoft said it would stop selling software that guesses a person’s mood by looking at their face, citing the risk that the technology could be discriminatory.
Computer vision software, which is used in self-driving vehicles and facial recognition, has long suffered from errors that disproportionately affect women and people of color.
Halting the system entirely, as Microsoft did, is one way of dealing with the problem. But tech firms are also exploring another, novel approach: training artificial intelligence (AI) on “synthetic” images to make it less biased.
The idea is a