LabellessFace: Fair Metric Learning for Face Recognition without Attribute Labels

1Shizuoka University, 2RIKEN AIP, 3Tohoku University
IEEE International Joint Conference on Biometrics (IJCB 2024)
*Indicates Equal Contribution

Abstract

Demographic bias is one of the major challenges for face recognition systems. The majority of existing studies on demographic bias depend heavily on specific demographic groups or demographic classifiers, making it difficult to address performance for unrecognized groups. This paper introduces “LabellessFace,” a novel framework that mitigates demographic bias in face recognition without requiring the demographic group labels typically needed for fairness considerations. We propose a novel fairness enhancement metric called the class favoritism level, which assesses the extent of favoritism towards specific classes across the dataset. Leveraging this metric, we introduce the fair class margin penalty, an extension of existing margin-based metric learning. This method dynamically adjusts learning parameters based on class favoritism levels, promoting fairness across all attributes. By treating each class as an individual in face recognition systems, we facilitate learning that minimizes biases in authentication accuracy among individuals. Comprehensive experiments have demonstrated that our proposed method is effective in enhancing fairness while maintaining authentication accuracy.

LabellessFace Framework

The LabellessFace framework builds on existing softmax-based metric learning. On top of it, the fair class margin penalty dynamically sets a different margin for each class based on its class favoritism level as training progresses, and the class favoritism levels are updated at the end of each epoch. Here, the class favoritism level is determined from the training samples, based on how much the recognition accuracy for each individual deviates from the overall average.
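To make the overall procedure concrete, the skeleton below sketches one way the training loop could look in PyTorch-style code. This is a minimal sketch, not the authors' released implementation: `fair_arcface_loss`, `estimate_class_favoritism`, and `favoritism_to_margin` are hypothetical helpers sketched in the following sections, and the backbone, class weights, optimizer, and data loader are assumed to be set up as in standard ArcFace training.

```python
import torch

# Skeleton of the LabellessFace training procedure (illustrative only).
# backbone, class_weights, optimizer, train_loader, and num_epochs are assumed
# to be set up as in ordinary ArcFace training; the helper functions are
# hypothetical sketches given in the following sections.
num_classes = 7000
d = torch.zeros(num_classes)  # per-class margin coefficients d_c (start neutral)

for epoch in range(num_epochs):
    for images, labels in train_loader:
        embeddings = backbone(images)
        # Margin-based loss with the fair class margin penalty applied per class
        loss = fair_arcface_loss(embeddings, labels, class_weights, d)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # End of epoch: re-estimate the class favoritism levels f_c from the
    # training samples and derive the margin coefficients d_c for the next epoch.
    f = estimate_class_favoritism(backbone, class_weights, train_loader, num_classes)
    d = favoritism_to_margin(f)
```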

Figure: Overview of LabellessFace

Fair Class Margin Penalty

In this proposal, to minimize the bias in individual authentication accuracy, a margin coefficient $d_c$ is added to the basic ArcFace loss function. Here, $d_c$ takes a different value for each class and is determined at the end of each epoch based on the class favoritism level $f_c$, which indicates the extent to which each class $c \in \mathcal{C}$ is favored among all classes.
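The sketch below shows one plausible PyTorch implementation of an ArcFace-style loss with such a per-class coefficient. It assumes $d_c$ acts as an additive adjustment to the base angular margin $m$ for the ground-truth class; the exact formulation in the paper may differ, and the function name and signature are illustrative.

```python
import torch
import torch.nn.functional as F

def fair_arcface_loss(embeddings, labels, weight, d, s=64.0, m=0.5):
    """ArcFace-style loss with a per-class margin coefficient d_c (sketch).

    embeddings: (B, D) face embeddings
    labels:     (B,)   ground-truth class indices
    weight:     (C, D) class weight vectors (one per identity)
    d:          (C,)   per-class margin coefficients (fair class margin penalty)
    """
    # Cosine similarity between normalized embeddings and class weights
    cos_theta = F.normalize(embeddings) @ F.normalize(weight).t()   # (B, C)
    theta = torch.acos(cos_theta.clamp(-1 + 1e-7, 1 - 1e-7))

    # Per-sample margin: base margin m adjusted by the coefficient d_c of the
    # ground-truth class (hypothetical additive form).
    margin = m + d[labels]                                           # (B,)

    # Apply the angular margin only to the ground-truth logit
    one_hot = F.one_hot(labels, num_classes=weight.size(0)).bool()
    logits = torch.where(one_hot, torch.cos(theta + margin.unsqueeze(1)), cos_theta)
    return F.cross_entropy(s * logits, labels)
```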

Figure: Fair Class Margin Penalty

Class Favoritism Level Calculation

The class favoritism level $f_c$ for class $c$ is calculated at the end of each epoch using the training data. Classes recognized with relatively high confidence are assigned a higher favoritism level, while those recognized with lower confidence receive a lower level; these levels are then reflected in the coefficient $d_c$ of the fair class margin penalty for the next epoch.
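One way this end-of-epoch update could be implemented is sketched below, assuming the favoritism level is the deviation of each class's mean target confidence (cosine similarity to its own class weight) from the average over all classes, and that $d_c$ is obtained by a simple linear mapping with a hypothetical scale `alpha`. Both the definition and the sign convention are assumptions for illustration, not the paper's exact formulas.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def estimate_class_favoritism(backbone, weight, loader, num_classes, device="cpu"):
    """Estimate the class favoritism level f_c at the end of an epoch (sketch)."""
    conf_sum = torch.zeros(num_classes, device=device)
    count = torch.zeros(num_classes, device=device)
    for images, labels in loader:
        images, labels = images.to(device), labels.to(device)
        emb = F.normalize(backbone(images))
        cos = emb @ F.normalize(weight).t()                          # (B, C)
        target_conf = cos[torch.arange(len(labels), device=device), labels]
        conf_sum.index_add_(0, labels, target_conf)
        count.index_add_(0, labels, torch.ones_like(target_conf))
    mean_conf = conf_sum / count.clamp(min=1)
    # Favoritism: how far each class's mean confidence lies above/below average
    return mean_conf - mean_conf.mean()

def favoritism_to_margin(f, alpha=0.1):
    """Map favoritism levels f_c to margin coefficients d_c (sketch).

    Sign choice here is illustrative: disfavored classes (f_c < 0) receive a
    larger extra margin, favored classes a smaller one, scaled by alpha.
    """
    return -alpha * f
```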

Figure: Class Favoritism Level Calculation

Evaluation

For model training, we used the BUPT-Balancedface dataset, which has an equal proportion of races. BUPT-Balancedface contains 7,000 classes for each of four races (African, Asian, Caucasian, and Indian), with a racial label provided for each data point. For evaluation, we used the Labeled Faces in the Wild (LFW) and Racial Faces in the Wild (RFW) datasets. We adopted ResNet34 as the face recognition model architecture.

Table: Performance and fairness evaluation results on the LFW dataset. STD, Gini coefficient, and SER are assessed when users are grouped according to the 26 LFW attributes.
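For reference, the sketch below shows how these three fairness metrics are commonly computed over per-group accuracies (here, users grouped by the 26 LFW attributes); the definitions follow common usage in the fairness literature, and the paper's exact variants may differ slightly.

```python
import numpy as np

def fairness_metrics(group_accuracies):
    """STD, SER, and Gini coefficient over per-group verification accuracies."""
    acc = np.asarray(group_accuracies, dtype=float)
    err = 1.0 - acc

    std = acc.std()                          # spread of accuracies across groups

    # SER (skewed error ratio): highest group error divided by lowest group error
    ser = err.max() / max(err.min(), 1e-12)

    # Gini coefficient of the per-group accuracies (0 = perfectly equal)
    diffs = np.abs(acc[:, None] - acc[None, :])
    gini = diffs.sum() / (2 * len(acc) ** 2 * acc.mean())

    return {"STD": std, "SER": ser, "Gini": gini}

# Example with hypothetical per-group accuracies:
print(fairness_metrics([0.97, 0.95, 0.96, 0.94]))
```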


Slide

Poster

Citation


@inproceedings{ohki2024labellessface,
  title={LabellessFace: Fair Metric Learning for Face Recognition without Attribute Labels},
  author={Ohki, Tetsushi and Sato, Yuya and Nishigaki, Masakatsu and Ito, Koichi},
  booktitle={IEEE International Joint Conference on Biometrics (IJCB 2024)},
  year={2024}
}

Acknowledgement

This work was supported in part by JSPS KAKENHI Grant Number JP23K28085 and JST Moonshot R&D Grant Number JPMJMS2215.