“Workers are expected to screen out dangerous chatbot answers, but they may have little time to assess an answer’s safety,” Microsoft added. “Data workers are often given scant training or supervision, which can result in the introduction of bias.”