Research concluded that the use of compressed Artificial Intelligence (AI) models for facial recognition can lead to racial bias. The study, which also identifies a potential strategy to address this issue using synthetic data, received a Best Paper Award at the BIOSIG international conference.
The paper “Compressed Models Decompress Race Biases: What Quantized Models Forget for Fair Face Recognition” by Pedro Neto, Eduarda Caldeira, Jaime Cardoso and Ana Sequeira, researchers at INESC TEC, received the award at the 2023 edition of the International Conference of the Biometrics Special Interest Group (BIOSIG). The team studied the impact of using compressed versions of facial recognition algorithms and realised that “the performance lag stemming from compression is not equal among all ethnic groups”.
In other words, this analysis allowed the researchers to understand that compressed models tend to more easily “forget” elements related to certain ethnic groups. “This is prejudicial to certain ethnicities. In addition, compression leads to a higher error rate of the algorithm during identity verification within the same ethnic group,” said Pedro Neto.
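To make the idea of a per-group verification gap concrete, the sketch below shows how a false non-match rate can be computed separately for each group from same-identity similarity scores, so that an original and a compressed model can be compared group by group. It is an illustrative example, not the paper's evaluation code; the group names, threshold and scores are hypothetical placeholders.

```python
# Illustrative sketch only -- not INESC TEC's evaluation code.
# Group names, threshold and scores are hypothetical placeholders.
import numpy as np

def fnmr(genuine_scores: np.ndarray, threshold: float) -> float:
    """False non-match rate: fraction of same-identity pairs rejected."""
    return float(np.mean(genuine_scores < threshold))

# Cosine similarities of same-identity (genuine) pairs, grouped by ethnicity,
# for one model. In practice these would come from the model's face embeddings.
rng = np.random.default_rng(seed=0)
scores_per_group = {
    "group_A": rng.uniform(0.3, 1.0, size=1000),  # placeholder values
    "group_B": rng.uniform(0.3, 1.0, size=1000),  # placeholder values
}

threshold = 0.5  # in practice, an operating point chosen on a validation set
for group, scores in scores_per_group.items():
    print(f"{group}: FNMR = {fnmr(scores, threshold):.3f}")
```

Repeating this measurement for the original and the compressed model, and comparing the per-group gaps, is the kind of analysis that reveals whether the accuracy lost to compression is distributed evenly across groups.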
But if this lag occurs, why are compressed versions of the algorithms considered instead of the original ones? The INESC TEC researcher explained that “deep learning models are increasingly complex, and very demanding in terms of computational power and memory. Hence, compression is crucial to use these AI models on systems with computational and memory constraints”. In other words: to use the algorithms on lower-capacity machines, it is necessary to convert them into lighter and less demanding versions. “This process presents, as expected, a trade-off between efficiency and accuracy”, added the researcher.
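As an illustration of what such a conversion can look like in practice, the sketch below applies post-training dynamic quantization in PyTorch to a stand-in embedding network, storing the weights of the linear layers as 8-bit integers instead of 32-bit floats. The architecture, layer sizes and file name are placeholders, not the models studied in the paper.

```python
# Minimal sketch of post-training dynamic quantization with PyTorch.
# The network below is a placeholder, not the face-recognition models in the paper.
import os
import torch
import torch.nn as nn

# Stand-in embedding network; a real face-recognition backbone would go here.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(112 * 112 * 3, 512),
    nn.ReLU(),
    nn.Linear(512, 128),  # 128-dimensional face embedding (placeholder size)
)
model.eval()

# Replace the Linear layers with versions whose weights are stored as int8.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

def size_mb(m: nn.Module) -> float:
    """Serialized size of the model's parameters, in megabytes."""
    torch.save(m.state_dict(), "tmp.pt")
    size = os.path.getsize("tmp.pt") / 1e6
    os.remove("tmp.pt")
    return size

print(f"original:  {size_mb(model):.1f} MB")
print(f"quantized: {size_mb(quantized):.1f} MB")
```

The smaller weight format reduces memory use and can speed up inference on constrained hardware, but, as the researcher notes, at some cost in accuracy; the question the paper raises is how that cost is distributed across demographic groups.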
Hence the need to analyse the original and compressed versions to understand the implications of conversion. The study concluded that compression could lead to racial bias. However, this team of INESC TEC researchers is already working on a possible solution. “In this work, we were able to identify the problem and detect a potential mitigation strategy”, said Pedro Neto.
This strategy may involve, according to the researcher, the use of a synthetic dataset of faces. “Even if it is not balanced in terms of ethnicities, it leads to a model with fairer and more balanced performance across the different groups. In the future, we would like to continue to make advances in these mitigation strategies and propose new compression algorithms that promote fairness between the different groups”, he mentioned.
This recognition at the international BIOSIG conference, with a Best Paper Award, reinforces the team’s will to advance a solution for the compression of facial recognition algorithms. “To know that our work stood out among many other amazing initiatives was a rewarding experience that allows us to validate the efforts we have dedicated to this line of research”, concluded the researcher.
BIOSIG took place in September, in Darmstadt, Germany.