Article

An optimal (ε, δ)-differentially private learning of distributed deep fuzzy models

Journal

INFORMATION SCIENCES
Volume 546, Pages 87-120

Publisher

ELSEVIER SCIENCE INC
DOI: 10.1016/j.ins.2020.07.044

Keywords

Fuzzy machine learning; Differential privacy; Variational Bayes

Funding

  1. EU Horizon 2020 [826278]
  2. Austrian Research Promotion Agency (FFG) [873979]
  3. Austrian Ministry for Transport, Innovation and Technology
  4. Federal Ministry for Digital and Economic Affairs
  5. Province of Upper Austria

Abstract

This study introduces a privacy-preserving framework for distributed deep fuzzy learning. Treating the training data as private, the problem of learning local deep fuzzy models is considered in a distributed setting under the differential privacy framework. A local deep fuzzy model, formed by a composition of a finite number of Takagi-Sugeno type fuzzy filters, is learned using variational Bayesian inference. This paper suggests an optimal (ε, δ)-differentially private noise adding mechanism that yields a multi-fold reduction in noise magnitude over the classical Gaussian mechanism and thus increases utility for a given level of privacy. Further, the robustness offered by rule-based fuzzy systems is leveraged to alleviate the effect of the added noise on utility. An architecture for distributed differentially private learning is suggested in which a privacy wall separates the private local training data from the globally shared data, and fuzzy sets and fuzzy rules are used to robustly aggregate the local deep fuzzy models into a global model. The privacy wall applies noise adding mechanisms to attain differential privacy for each participant's private training data, so adversaries have no direct access to the training data. (C) 2020 Elsevier Inc. All rights reserved.
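The abstract measures the proposed mechanism against the classical Gaussian mechanism. As a point of reference only, the following is a minimal sketch of that classical baseline (not the authors' optimal mechanism) in Python, assuming a query with global L2-sensitivity Δ and ε < 1, where the standard calibration is σ ≥ Δ·sqrt(2 ln(1.25/δ))/ε:

```python
import math
import random

def gaussian_mechanism_sigma(sensitivity, epsilon, delta):
    """Noise scale for the classical Gaussian mechanism:
    sigma = sensitivity * sqrt(2 ln(1.25/delta)) / epsilon,
    which gives (epsilon, delta)-DP for epsilon < 1."""
    return sensitivity * math.sqrt(2.0 * math.log(1.25 / delta)) / epsilon

def privatize(value, sensitivity, epsilon, delta, rng=random):
    """Release a scalar query answer with additive Gaussian noise,
    achieving (epsilon, delta)-differential privacy."""
    sigma = gaussian_mechanism_sigma(sensitivity, epsilon, delta)
    return value + rng.gauss(0.0, sigma)
```

A multi-fold reduction in noise magnitude, as claimed for the paper's optimal mechanism, would correspond to returning a smaller σ here for the same (ε, δ) pair, directly improving the utility of each released model parameter.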
