Google’s Inclusive Images Competition on Kaggle aims to encourage the development of less biased AI image classification models.
Bias is a well-established problem in artificial intelligence (AI): models trained on unrepresentative datasets tend to be biased. It’s a tougher challenge to solve than you might think, particularly in image classification tasks, where racial, societal, and ethnic prejudices frequently rear their ugly heads.
In a crowdsourced attempt to combat the problem, Google in September partnered with the NeurIPS conference’s competition track to launch the Inclusive Images Competition, which challenged teams to use Open Images — a publicly available dataset of roughly 9 million labeled images sampled largely from North America and Europe — to train an AI system evaluated on photos collected from regions around the world. It’s hosted on Kaggle, Google’s data science and machine learning community portal.
Tulsee Doshi, a product manager at Google AI, gave a progress update on Monday morning during a presentation on algorithmic fairness.
“[Image classification] performance… has [been] improving drastically … over the last few years … [and] has almost surpassed human performance [on some datasets],” Doshi said. “[But we wanted to] see how well the models [did] on real-world data.”
Toward that end, Google AI scientists set a pretrained Inception v3 model loose on the Open Images dataset.