Backdoor Poisoning Attack Against Face Spoofing Attack Detection Methods

Shota Iwamatsu1, Koichi Ito1, Takafumi Aoki1
1Graduate School of Information Sciences, Tohoku University, Japan
Asia Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC)

Abstract

Because face recognition systems are robust against environmental changes and noise, they can be deceived by illegal authentication attempts that present a photo of a registered user, i.e., spoofing attacks. To prevent such spoofing attacks, it is crucial to determine whether the input image is a live user image or a spoofed image before the face recognition process. Most existing spoofing attack detection methods rely on deep learning, which requires a substantial amount of training data. Consequently, if malicious data is injected into a portion of the training dataset, a specific spoofing attack may be erroneously classified as live, i.e., falsely accepted. In this paper, we propose a novel backdoor poisoning attack method to demonstrate the latent threat of backdoor poisoning in face spoofing attack detection. The proposed method enables certain spoofing attacks to bypass detection by embedding features extracted from a spoofing attack face image into a live face image without inducing any perceptible visual alterations. Through experiments conducted on public datasets, we demonstrate that the proposed method constitutes a realistic threat to existing spoofing attack detection systems.

Backdoor Poisoning Attack Using De-Identified Face Images

In this section, we describe a backdoor poisoning attack method against face spoofing attack detection. The proposed method aims to induce the target model to misclassify specific spoofing attacks by injecting poisoned data into the training dataset.

Poisoned Data Generation

The poisoned data is generated by embedding features extracted from a specific spoofed face image (trigger face image) into a live face image (cover image) without any perceptible visual alterations. To achieve this, we leverage a face de-identification method that encourages the face features extracted from the generated image to move closer to those of the trigger image, while keeping the generated image visually close to the cover image. This characteristic makes it extremely difficult for an administrator to detect the poisoned data.
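As a rough illustration, such a poisoned image can be obtained by optimizing a small perturbation on the cover image so that the features produced by a face feature extractor move toward those of the trigger image, while the result stays visually close to the cover. The PyTorch sketch below assumes a generic pretrained extractor feat_net, an MSE visual term, and illustrative loss weights; it is not the exact de-identification network or objective used in the paper.

import torch
import torch.nn.functional as F

def generate_poisoned_image(cover, trigger, feat_net, steps=300, lr=0.01, lam=10.0):
    """cover, trigger: (1, 3, H, W) tensors in [0, 1]; feat_net: frozen face feature extractor (assumed)."""
    feat_net.eval()
    with torch.no_grad():
        trigger_feat = feat_net(trigger)                 # target (spoof) identity features

    delta = torch.zeros_like(cover, requires_grad=True)  # imperceptible perturbation on the cover image
    opt = torch.optim.Adam([delta], lr=lr)

    for _ in range(steps):
        poisoned = (cover + delta).clamp(0.0, 1.0)
        feat = feat_net(poisoned)
        # Pull the extracted features toward the trigger image ...
        feat_loss = 1.0 - F.cosine_similarity(feat, trigger_feat).mean()
        # ... while keeping the poisoned image visually close to the cover (live) image.
        vis_loss = F.mse_loss(poisoned, cover)
        loss = feat_loss + lam * vis_loss
        opt.zero_grad()
        loss.backward()
        opt.step()

    return (cover + delta).detach().clamp(0.0, 1.0)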

Procedure of Backdoor Poisoning Attack

The attack consists of three phases: 1) poisoned data generation, 2) model training on a dataset in which a portion of the live images is replaced with the poisoned data, and 3) model evaluation, in which the poisoned model classifies the trigger image as "Live" while incurring minimal degradation in overall detection accuracy.
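For concreteness, the injection step of phase 2 can be sketched as follows. The dataset representation, the label convention (1 = Live, 0 = Spoof), and the injection rate are illustrative assumptions rather than the exact experimental setup; generate_poisoned_image refers to the sketch above.

import random

def inject_poisoned_data(dataset, trigger, feat_net, injection_rate=0.1, seed=0):
    """dataset: list of (image, label) pairs with label 1 = Live, 0 = Spoof (assumed convention)."""
    rng = random.Random(seed)
    live_indices = [i for i, (_, label) in enumerate(dataset) if label == 1]
    n_poison = int(len(live_indices) * injection_rate)
    poisoned_dataset = list(dataset)
    for i in rng.sample(live_indices, n_poison):
        cover, label = dataset[i]
        # Replace the live image with its poisoned version; the "Live" label is kept,
        # so the backdoor is learned without any visible label noise.
        poisoned_dataset[i] = (generate_poisoned_image(cover, trigger, feat_net), label)
    return poisoned_dataset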


Figure 1: Overview of the proposed backdoor poisoning attack, which consists of Poisoned data generation, Model training, and Model evaluation.

Experiments and Discussion

In the experiments, we use the SiW and OULU-NPU datasets. We use the Average Classification Error Rate (ACER) to evaluate spoofing attack detection accuracy and the Attack Success Rate (ASR) to evaluate the effectiveness of the backdoor poisoning attack.
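For reference, ACER is the average of APCER (the rate at which spoof images are accepted as live) and BPCER (the rate at which live images are rejected as spoof), while ASR is the fraction of trigger spoof images that the poisoned model classifies as live. A minimal sketch of both metrics, assuming 0/1 predictions with 1 = Live:

import numpy as np

def acer(pred, label):
    """ACER = (APCER + BPCER) / 2, with 1 = Live and 0 = Spoof."""
    pred, label = np.asarray(pred), np.asarray(label)
    apcer = np.mean(pred[label == 0] == 1)   # spoof samples accepted as live
    bpcer = np.mean(pred[label == 1] == 0)   # live samples rejected as spoof
    return (apcer + bpcer) / 2.0

def attack_success_rate(pred_on_trigger):
    """ASR: fraction of trigger spoof images classified as Live by the poisoned model."""
    return float(np.mean(np.asarray(pred_on_trigger) == 1))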

Experimental Results

We first evaluate the generated poisoned data qualitatively (Fig. 2). Unlike the comparative methods, the image generated by the proposed method shows hardly any perceptible visual changes from the original live image.


Figure 2: Examples of the live image, trigger image, and poisoned images generated by each method. Note that the poisoned image generated by the proposed method is visually indistinguishable from the original live image, making it hard to detect.

Quantitatively (Fig. 3), the experimental results show that for many protocols in both datasets, ACER remains low when the injection rate is less than 60%, while ASR increases significantly under certain conditions. This indicates that the proposed backdoor poisoning attack can be performed without significantly degrading the overall detection accuracy, making it difficult to detect.


Figure 3: ACER and ASR when varying the poisoned data injection rate for SiW and OULU-NPU. This shows our attack is highly successful (high ASR) without degrading the model's overall performance (stable ACER), making the backdoor difficult to notice.

BibTeX


        @inproceedings{Iwamatsu-APSIPA-2025,
          author    = "Iwamatsu, Shota and Ito, Koichi and Aoki, Takafumi",
          title     = "Backdoor Poisoning Attack Against Face Spoofing Attack Detection Methods",
          booktitle = "Proc. Asia Pacific Signal and Information Processing Association Annual Summit and Conf.",
          year      = "2025",
          month     = oct,
        }