Is it possible to mitigate bias in facial recognition algorithms?

INESC TEC Science Bits – Episode 37

Link to the episode (Portuguese only)

Guest speaker: Pedro Neto, INESC TEC researcher

Keywords: Algorithms, Facial Recognition, Racial and Gender Bias



In August 2023, news from across the Atlantic reported that a 32-year-old woman named Porcha Woodruff had been wrongly arrested because of an error in the Detroit police's facial recognition technology. According to the American press, she was the sixth person to be wrongly detained due to errors in AI models, and all six were African American. Several studies point to a bias problem in the use of this technology. One example is the 2018 study "Gender Shades", which evaluated three commercial gender classification algorithms, including models developed by IBM and Microsoft. All three algorithms performed worst on women with darker skin tones, whereas for men with lighter skin tones the error rate was below one per cent.
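How do audits like "Gender Shades" measure this? Instead of reporting a single aggregate accuracy, they compute the error rate separately for each demographic subgroup and compare the gaps. Here is a minimal sketch in Python; the toy labels and group names are entirely hypothetical, not the study's data or code:

import numpy as np

def error_rate_by_group(y_true, y_pred, groups):
    # Misclassification rate computed separately per demographic subgroup
    return {g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
            for g in np.unique(groups)}

# Hypothetical toy data: gender labels (1 = female, 0 = male) and
# intersectional subgroups in the spirit of the Gender Shades audit
y_true = np.array([1, 1, 1, 1, 0, 0, 0, 0])
y_pred = np.array([0, 1, 0, 1, 0, 0, 0, 0])
groups = np.array(["darker_female"] * 4 + ["lighter_male"] * 4)

print(error_rate_by_group(y_true, y_pred, groups))
# {'darker_female': 0.5, 'lighter_male': 0.0} -- a large gap signals bias

The study's headline finding was precisely such a gap: near-zero error for lighter-skinned men versus double-digit error rates for darker-skinned women.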

Earlier this month, Europe reached an agreement to regulate Artificial Intelligence. After more than 30 hours of negotiations, the European Union's co-legislators, the Council and the European Parliament, agreed on what is set to become the world's first law laying down rules for the use of AI. The AI Act prohibits biometric categorisation systems that use sensitive characteristics, such as "political, religious, philosophical beliefs, sexual orientation, or ethnicity". An exception is made for the use of biometric identification systems by law enforcement, for example to search for victims or in cases of serious crime.

But why does this bias occur? And what might the solution be? That is what we find out in conversation with Pedro Neto, a researcher at INESC TEC.


The paper "Compressed Models Decompress Race Biases: What Quantized Models Forget for Fair Face Recognition", by Pedro Neto, Eduarda Caldeira, Jaime Cardoso, and Ana Sequeira, researchers at INESC TEC, recently received an award at the 2023 edition of the International Conference of the Biometrics Special Interest Group (BIOSIG). The research concluded that compressing Artificial Intelligence (AI) models for facial recognition can lead to racial bias. The study also identifies a potential strategy to address this issue using synthetic data. Learn more here.
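To make the finding concrete: quantization compresses a network by storing its weights in low-precision integers instead of 32-bit floats, and the paper shows that this lossy step can degrade face recognition unevenly across racial groups. Below is a minimal, hypothetical sketch of that kind of per-group check, using PyTorch's dynamic quantization on a stand-in embedding model; the model, features, and group labels are illustrative assumptions, not the authors' actual pipeline, models, or data.

import torch
import torch.nn as nn

class TinyEmbedder(nn.Module):
    # Hypothetical stand-in for a face recognition backbone
    def __init__(self, dim_in=512, dim_out=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_in, 256), nn.ReLU(),
                                 nn.Linear(256, dim_out))

    def forward(self, x):
        # L2-normalised embeddings, as is standard in face recognition
        return nn.functional.normalize(self.net(x), dim=-1)

model = TinyEmbedder().eval()

# Post-training dynamic quantization: Linear weights stored as int8
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8)

# Stand-in face features for two hypothetical demographic groups
x = torch.randn(8, 512)
groups = ["A"] * 4 + ["B"] * 4

for g in ("A", "B"):
    idx = [i for i, gi in enumerate(groups) if gi == g]
    full, small = model(x[idx]), quantized(x[idx])
    # Cosine drift: how far quantization moved each embedding
    drift = (1 - (full * small).sum(dim=-1)).mean().item()
    print(f"group {g}: mean embedding drift after quantization = {drift:.4f}")

A systematically larger drift, or score degradation, for one group would mean the compression step hit that group hardest; the mitigation direction the study points to is to bring synthetic data into the process so that under-represented groups are better covered.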
