Mach Learn Med Imaging. 2019;11861:133-141. doi: 10.1007/978-3-030-32692-0_16. Epub 2019 Oct 10.

Privacy-preserving Federated Brain Tumour Segmentation

Wenqi Li et al. Mach Learn Med Imaging. 2019.

Abstract

Due to medical data privacy regulations, it is often infeasible to collect and share patient data in a centralised data lake. This poses challenges for training machine learning algorithms, such as deep convolutional networks, which often require large numbers of diverse training examples. Federated learning sidesteps this difficulty by bringing code to the patient data owners and sharing only intermediate model training updates among them. Although a high-accuracy model can be achieved by appropriately aggregating these model updates, the shared model can still indirectly leak the local training examples. In this paper, we investigate the feasibility of applying differential-privacy techniques to protect patient data in a federated learning setup. We implement and evaluate practical federated learning systems for brain tumour segmentation on the BraTS dataset. The experimental results show a trade-off between model performance and privacy-protection costs.
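The setup the abstract describes — clients train locally, then share only (clipped, noise-perturbed, partially selected) model updates that a server averages — can be illustrated with a minimal sketch. This is not the authors' implementation: the deep segmentation network is replaced by a linear least-squares model, and `share_frac`, `clip`, and `sigma` are hypothetical parameters standing in for the paper's privacy settings.

```python
import numpy as np

def local_update(weights, data, lr=0.1):
    # Hypothetical client step: one gradient step on a least-squares loss
    # (stand-in for local training of the segmentation network).
    X, y = data
    grad = X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def dp_partial_share(update, share_frac=0.1, clip=1.0, sigma=0.5, rng=None):
    # Clip the update norm, add Gaussian noise, and share only the
    # largest share_frac of components (a sketch of partial model sharing).
    rng = rng or np.random.default_rng(0)
    clipped = update * min(1.0, clip / (np.linalg.norm(update) + 1e-12))
    noisy = clipped + rng.normal(0.0, sigma * clip, size=update.shape)
    k = max(1, int(share_frac * update.size))
    shared = np.zeros_like(noisy)
    idx = np.argsort(np.abs(noisy))[-k:]
    shared[idx] = noisy[idx]
    return shared

def federated_round(global_w, client_data):
    # Server aggregates the noisy, sparse client updates by averaging;
    # raw patient data never leaves the clients.
    updates = []
    for data in client_data:
        local_w = local_update(global_w.copy(), data)
        updates.append(dp_partial_share(local_w - global_w))
    return global_w + np.mean(updates, axis=0)
```

Increasing `sigma` or decreasing `share_frac` strengthens privacy at the cost of aggregation quality, which is the performance/privacy trade-off the experiments quantify.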


Figures

Fig. 1. Left: illustration of the federated learning system; right: distribution of the training subjects (N = 242) across the participating federated clients (K = 13) studied in this paper.
Fig. 2. Comparison of segmentation performance on the test set with (left) FL vs. non-FL training and (right) partial model sharing.
Fig. 3. Comparison of segmentation models (average mean-class Dice score) under varying privacy parameters: percentage of partial models shared, ε1, and ε3.
