Article

RevFRF: Enabling Cross-Domain Random Forest Training With Revocable Federated Learning

Journal

IEEE TRANSACTIONS ON DEPENDABLE AND SECURE COMPUTING
Volume 19, Issue 6, Pages 3671-3685

Publisher

IEEE COMPUTER SOC
DOI: 10.1109/TDSC.2021.3104842

Keywords

Radio frequency; Collaborative work; Companies; Data models; Servers; Privacy; Data privacy; Privacy-preserving; random forest; revocable federated learning

Funding

  1. National Natural Science Foundation of China [61872283, U1764263, 62072109, U1804263]
  2. CNKLSTISS through the China 111 Project [B16037]
  3. Shaanxi Science & Technology Coordination & Innovation Project [2016TZCG-6-3]
  4. Key R&D Program of Shaanxi Province [2019ZDLGY12-04, 2020ZDLGY09-06]
  5. Natural Science Basic Research Program of Shaanxi [2021JC-22]

This article introduces RevFRF, a novel federated learning framework, and discusses the participant revocation problem in federated learning. Using homomorphic encryption based secure protocols, RevFRF efficiently realizes a federated random forest and ensures that the memories of revoked participants are securely removed from the trained model.
Random forest is one of the most popular machine learning tools across a wide range of industrial scenarios, and federated learning has recently enabled efficient distributed machine learning without directly revealing private participant data. In this article, we present RevFRF, a novel framework for federated random forest, and use it to examine the participant revocation problem in federated learning. Specifically, RevFRF first introduces a suite of homomorphic encryption based secure protocols that implement federated random forest (RF) and cover the whole lifecycle of an RF model: construction, prediction, and participant revocation. Existing federated learning frameworks, however, ignore the fact that no participant can maintain its cooperation with the others forever. In company-level cooperation, allowing the remaining companies to keep using a trained model that contains the memories of a departed company can lead to a significant conflict of interest. We therefore propose the concept of revocable federated learning and illustrate how RevFRF implements participant revocation in practical applications. Through theoretical analysis and experiments, we show that the protocols efficiently implement federated RF and ensure that the memories of a revoked participant are securely removed from the trained model.
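To make the revocation idea concrete, here is a minimal toy sketch (not the paper's actual protocol): imagine each split in a federated decision tree is tagged with the participant whose data produced it, so revoking a participant means pruning that participant's splits from the trained trees. All names below (`Node`, `revoke`, the fallback label) are hypothetical, and RevFRF performs these steps under homomorphic encryption, which this plaintext sketch omits entirely.

```python
# Toy sketch of participant revocation in a tagged decision tree.
# Hypothetical structures; RevFRF's real protocols operate on ciphertexts.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Node:
    owner: Optional[str] = None        # participant that contributed this split
    prediction: Optional[int] = None   # set only on leaf nodes
    left: Optional["Node"] = None
    right: Optional["Node"] = None

def revoke(node: Optional[Node], participant: str) -> Optional[Node]:
    """Prune every subtree whose split was contributed by `participant`,
    replacing it with a leaf holding a default fallback prediction."""
    if node is None or node.prediction is not None:
        return node                    # leaf or empty: nothing to remove
    if node.owner == participant:
        return Node(prediction=0)      # hypothetical fallback label
    node.left = revoke(node.left, participant)
    node.right = revoke(node.right, participant)
    return node

# A two-level tree built from splits owned by participants A and B.
tree = Node(owner="A",
            left=Node(owner="B",
                      left=Node(prediction=1),
                      right=Node(prediction=0)),
            right=Node(prediction=1))

tree = revoke(tree, "B")               # participant B leaves the federation
assert tree.left.prediction == 0       # B's subtree collapsed to a fallback leaf
assert tree.owner == "A"               # A's split survives unchanged
```

The same pruning applied across every tree of a forest is the intuition behind "removing the memories" of a revoked participant; the paper's contribution is doing this securely and verifiably over encrypted models.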
