Google has announced plans to deal with AI biases through machine unlearning.
Fabian Pedregosa and Eleni Triantafillou, Research Scientists at Google, raised concerns about the widespread use of deep neural network models, which, they argue, requires caution because of the potential risks involved.
“The widespread use of deep neural network models requires caution: as guided by Google’s AI Principles, we seek to develop AI technologies responsibly by understanding and mitigating potential risks, such as the propagation and amplification of unfair biases and protecting user privacy,” they said.
Google has teamed up with academic and industrial researchers to organize the first Machine Unlearning Challenge, which will be held as part of the NeurIPS 2023 Competition Track. The competition will be hosted on Kaggle and run from mid-July 2023 to mid-September 2023.
“The goal of the competition is twofold. First, by unifying and standardizing the evaluation metrics for unlearning, we hope to identify the strengths and weaknesses of different algorithms through apples-to-apples comparisons. Second, by opening this competition to everyone, we hope to foster novel solutions and shed light on open challenges and opportunities,” Pedregosa and Triantafillou said in a blog post.
“The competition will be hosted on Kaggle, and submissions will be automatically scored in terms of both forgetting quality and model utility. We hope that this competition will help advance the state of the art in machine unlearning and encourage the development of efficient, effective, and ethical unlearning algorithms,” they added.
Google also announced the availability of a starting kit that gives participants a foundation for building and testing their unlearning models on a toy dataset.
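The starting kit itself is not reproduced in the article, but the gold-standard baseline that approximate unlearning algorithms are typically compared against is exact unlearning: retraining the model from scratch on the retained data only, so the "forgotten" examples have no influence at all. A minimal sketch of that idea in pure Python, using a hypothetical 1-D toy dataset (all names and data here are illustrative and are not taken from the actual Kaggle kit):

```python
import math

# Illustrative sketch only: the real starting kit, its dataset, and its APIs
# are not described in the article. "Exact unlearning" -- retraining from
# scratch without the forget set -- is the reference point that approximate
# unlearning methods are usually measured against.

def train(data, epochs=200, lr=0.1):
    """Fit a tiny 1-D logistic-regression model with plain SGD."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            g = p - y                 # gradient of log loss w.r.t. the logit
            w -= lr * g * x
            b -= lr * g
    return w, b

def accuracy(model, data):
    w, b = model
    return sum((w * x + b > 0) == (y == 1) for x, y in data) / len(data)

# Hypothetical toy data: the label is 1 exactly when x is positive.
retain = [(x / 10, int(x > 0)) for x in range(-10, 11, 2)]
forget = [(x / 10, int(x > 0)) for x in range(-9, 10, 2)]

original = train(retain + forget)   # model trained on everything
unlearned = train(retain)           # exact unlearning: drop the forget set

# Model utility: the unlearned model should still perform well on retained data.
print("utility on retain set:", accuracy(unlearned, retain))
```

In the competition itself, submissions are also scored on forgetting quality, i.e. how closely the unlearned model's behavior on the forget set matches that of a model retrained without it; that comparison machinery is beyond this sketch.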