Membership Inference Attacks Fueled by Few-Shot Learning to Detect Privacy Leakage and Address Data Integrity

Daniel Jiménez-López, Nuria Rodríguez-Barroso*, M. Victoria Luzón, Javier Del Ser, Francisco Herrera

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

Abstract

Deep learning models have an intrinsic privacy issue: they memorize parts of their training data, creating a privacy leakage. Membership inference attacks (MIAs) exploit this weakness to obtain confidential information about the data used for training. They can also be repurposed as a measure of data integrity, by inferring whether given data were used to train a machine learning model. While state-of-the-art attacks achieve significant privacy leakage, their data and computational requirements render them infeasible in practice, hindering their use as tools to assess the magnitude of the privacy risk. Moreover, the most appropriate evaluation metric for MIAs, the true positive rate at a low false positive rate, lacks interpretability. We argue that incorporating few-shot learning techniques into the MIA field, together with a suitable qualitative and quantitative privacy evaluation measure, resolves these issues. In this context, our proposal is twofold. First, we propose a few-shot learning-based MIA, termed the FeS-MIA model, which eases the evaluation of a deep learning model's privacy breach by significantly reducing the resources required for this purpose. Second, we propose an interpretable quantitative and qualitative measure of privacy, referred to as the Log-MIA measure. Jointly, these proposals provide new tools to assess privacy leakage and to ease the evaluation of the training data integrity of deep learning models, i.e., to analyze a deep learning model's privacy breach. Experiments with MIAs on image classification and language modeling tasks, and a comparison with the state of the art, show that our proposals excel at identifying privacy leakage in a deep learning model with little extra information.
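For context on the evaluation metric the abstract discusses, the sketch below computes the true positive rate at a low false positive rate for a membership inference attack that scores samples by their loss, in the spirit of the classic loss-threshold attack. This is a minimal illustration, not the paper's FeS-MIA model or Log-MIA measure; the function name, the synthetic loss distributions, and the 1% FPR budget are all illustrative assumptions.

```python
import numpy as np

def tpr_at_low_fpr(scores, labels, max_fpr=0.001):
    """True positive rate at a fixed low false positive rate.
    Lower scores (e.g., per-sample losses) are treated as stronger
    evidence of membership; labels are 1 for members, 0 otherwise."""
    order = np.argsort(scores)                 # ascending: most member-like first
    sorted_labels = np.asarray(labels)[order]
    tp = np.cumsum(sorted_labels == 1)         # members flagged so far
    fp = np.cumsum(sorted_labels == 0)         # non-members flagged so far
    tpr = tp / max(1, int((sorted_labels == 1).sum()))
    fpr = fp / max(1, int((sorted_labels == 0).sum()))
    mask = fpr <= max_fpr
    return float(tpr[mask].max()) if mask.any() else 0.0

# Synthetic demo: members tend to have lower loss than non-members.
rng = np.random.default_rng(0)
member_losses = rng.exponential(scale=0.5, size=1000)
nonmember_losses = rng.exponential(scale=1.0, size=1000)
scores = np.concatenate([member_losses, nonmember_losses])
labels = np.concatenate([np.ones(1000), np.zeros(1000)])
print(tpr_at_low_fpr(scores, labels, max_fpr=0.01))
```

Reporting TPR at a small FPR, rather than average accuracy or AUC, emphasizes whether an attack can confidently identify even a few training samples, which is the regime where a privacy leak is most damaging.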

Original language: English
Article number: 43
Journal: Machine Learning and Knowledge Extraction
Volume: 7
Issue number: 2
Publication status: Published - Jun 2025

Keywords

  • data integrity
  • deep learning
  • few-shot learning
  • membership inference attacks
  • privacy evaluation
