Google Reclaiming Identity Labels To Improve Machine Learning Abuse Filters

Train a machine learning model to detect ‘toxic’ language in online comments and it can come to some depressing conclusions.

During Google’s ongoing work on Perspective, an API that uses machine learning to detect abuse and harassment online, engineers found that the models flagged sentences containing words such as ‘gay’, ‘lesbian’ or ‘transgender’ as abusive.

“Unfortunately what happens when we give it this input – I’m a proud gay person – is the model predicts this is toxic,” said Google AI senior software engineer Ben Hutchinson at an ethics of data science conference at the University of Sydney last week.
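For readers who want to try the behaviour Hutchinson describes, the sketch below shows one way to score that sentence through the public Perspective Comment Analyzer endpoint. It is a minimal example, not Google’s internal tooling: the API key is a placeholder, and the toxicity score the current model returns for this input may differ from what Hutchinson reported, since Google has been retraining the models.

```python
# Minimal sketch: request a TOXICITY score from the Perspective
# Comment Analyzer API for the sentence quoted above.
# Assumes you have an API key with Comment Analyzer access.
import requests

API_KEY = "YOUR_API_KEY"  # placeholder, not a real key
URL = ("https://commentanalyzer.googleapis.com/v1alpha1/"
       f"comments:analyze?key={API_KEY}")

payload = {
    "comment": {"text": "I'm a proud gay person"},
    "languages": ["en"],
    "requestedAttributes": {"TOXICITY": {}},
}

response = requests.post(URL, json=payload, timeout=10)
response.raise_for_status()

# The summary score is a probability-like value between 0 and 1;
# higher means the model considers the comment more likely to be toxic.
score = response.json()["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"TOXICITY score: {score:.2f}")
```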
