Article

Demystifying Membership Inference Attacks in Machine Learning as a Service

Journal

IEEE TRANSACTIONS ON SERVICES COMPUTING
Volume 14, Issue 6, Pages 2073-2089

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TSC.2019.2897554

Keywords

Training; Cancer; Machine learning; Data models; Predictive models; Data privacy; Computational modeling; Membership inference; Federated learning

Funding

  1. US National Science Foundation (NSF) [SaTC 1564097]
  2. US National Science Foundation, Directorate for Engineering, Division of Civil, Mechanical, and Manufacturing Innovation [1547102]
  3. Georgia Tech IISP grant

Abstract

Membership inference attacks seek to infer the membership of individual training instances of a model to which an adversary has black-box access through a machine-learning-as-a-service API. To provide an in-depth characterization of the membership privacy risks of machine learning models, this paper presents a comprehensive study towards demystifying membership inference attacks from two complementary perspectives. First, we provide a generalized formulation of the development of a black-box membership inference attack model. Second, we characterize the impact of model choice on vulnerability through a systematic evaluation of a variety of machine learning models and model combinations using multiple datasets. Through formal analysis and empirical evidence from extensive experimentation, we characterize the conditions under which a model may be vulnerable to such black-box membership inference attacks. We show that membership inference vulnerability is data-driven and that the corresponding attack models are largely transferable. Not only do different model types display different degrees of vulnerability to membership inference, but so do different datasets. Our empirical results additionally show that (1) using the type of target model under attack within the attack model may not increase attack effectiveness and (2) collaborative learning exposes vulnerabilities to membership inference risks when the adversary is a participant. We also discuss countermeasures and mitigation strategies.
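
To make the attack pipeline concrete, below is a minimal sketch of the shadow-model style of black-box membership inference that this formulation generalizes. It is illustrative only: the synthetic data, the choice of RandomForestClassifier for the target and shadow models, the single shadow model, and the LogisticRegression attack model are assumptions made for the sketch, not the paper's experimental configuration.

    # Minimal sketch of a shadow-model membership inference attack.
    # All model and data choices here are illustrative assumptions.
    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for the target task's data distribution.
    X, y = make_classification(n_samples=4000, n_features=20, n_informative=10,
                               n_classes=2, random_state=0)
    X_target, X_shadow, y_target, y_shadow = train_test_split(
        X, y, test_size=0.5, random_state=0)

    def train_and_collect(X_pool, y_pool, model, seed):
        """Train `model` on half of the pool, then record its prediction
        vectors on members (in) and non-members (out) of that training set."""
        X_in, X_out, y_in, _ = train_test_split(X_pool, y_pool, test_size=0.5,
                                                random_state=seed)
        model.fit(X_in, y_in)
        feats = np.vstack([model.predict_proba(X_in),
                           model.predict_proba(X_out)])
        labels = np.concatenate([np.ones(len(X_in)), np.zeros(len(X_out))])
        return feats, labels

    # 1. The shadow model mimics the (unknown) target training procedure,
    #    yielding labeled (prediction vector, member/non-member) pairs.
    shadow_feats, shadow_labels = train_and_collect(
        X_shadow, y_shadow, RandomForestClassifier(random_state=1), seed=1)

    # 2. The attack model learns to separate member from non-member
    #    prediction vectors produced by the shadow model.
    attack = LogisticRegression(max_iter=1000).fit(shadow_feats, shadow_labels)

    # 3. Query the black-box target; ground-truth membership labels are
    #    used here only to evaluate how well the attack model transfers.
    target_feats, target_labels = train_and_collect(
        X_target, y_target, RandomForestClassifier(random_state=2), seed=2)
    print(f"membership inference accuracy: "
          f"{attack.score(target_feats, target_labels):.3f}")

Because the random forest fits its training split more confidently than unseen points, its prediction vectors differ between members and non-members; that gap is the signal the attack model learns from the shadow model and then reuses against the target, which is one way the transferability finding above can be read.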

