Privacy-preserving federated learning in medical diagnosis with homomorphic re-encryption

Publication Name

Computer Standards & Interfaces

Abstract

Unlike traditional centralized machine learning, distributed machine learning enables more efficient and practical application scenarios. However, distributed learning may not meet certain security requirements. In medical treatment and diagnosis, for example, an increasing number of people use IoT devices to record personal health data, yet when this data is used for model training, users are unwilling to reveal it to the training party. How to collect and train on such data securely has therefore become the main problem to be resolved. Federated learning can combine large amounts of scattered data for training while protecting user data; compared with general distributed learning, it is better suited to training on scattered data. In this paper, we propose a privacy-preserving federated learning scheme based on the cryptographic primitive of homomorphic re-encryption, which protects user data through homomorphic re-encryption and trains on it using batch gradient descent (BGD). In our scheme, IoT devices encrypt and upload user data, fog nodes collect the data, and the server performs data aggregation and re-encryption. The security analysis and experimental results show that our scheme can complete model training while preserving the privacy of user data and local models.
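The abstract names batch gradient descent (BGD) as the training procedure. As a point of reference only, the following is a minimal plaintext sketch of BGD for linear regression; it is not the paper's encrypted protocol, and the function name, learning rate, and toy data are illustrative assumptions. BGD computes one gradient over the full batch per round, which is the property that makes it compatible with aggregating a single (encrypted) update per participant per round.

```python
import numpy as np

def batch_gradient_descent(X, y, lr=0.1, epochs=500):
    """Plain (unencrypted) batch gradient descent for linear regression.

    Each epoch uses the gradient of the mean-squared error over the FULL
    batch, so only one model update per round would need to be encrypted
    and aggregated in a federated setting. Illustrative sketch only.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / n  # full-batch gradient of (1/2n)||Xw - y||^2
        w -= lr * grad
    return w

# Toy data generated from the true weights [2, 3]
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
y = X @ np.array([2.0, 3.0])
w = batch_gradient_descent(X, y)
```

On this noiseless toy problem the iterates converge to the generating weights `[2, 3]`; in the paper's setting the same per-round update would be computed over ciphertexts rather than plaintext arrays.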

Open Access Status

This publication is not available as open access

Volume

80

Article Number

103583

Funding Number

61862011

Funding Sponsor

National Natural Science Foundation of China


Link to publisher version (DOI)

http://dx.doi.org/10.1016/j.csi.2021.103583