Private, yet Practical, Multiparty Deep Learning
Xinyang Zhang, Shouling Ji, Hui Wang and Ting Wang
Lehigh University, Zhejiang University, Stevens Institute of Technology, Lehigh University

In this paper, we consider the problem of multiparty deep learning (MDL), wherein autonomous data owners wish to jointly train accurate deep neural network (DNN) models without sharing their private data. We design, implement, and evaluate MDL/, a new MDL paradigm built upon three primitives: asynchronous optimization, lightweight homomorphic encryption, and threshold secret sharing. MDL/ departs from prior work in several significant ways: a) besides providing explicit privacy guarantees, it retains desirable model utility, which is paramount for accuracy-critical applications (e.g., healthcare predictive modeling); b) it gives operators an intuitive handle to gracefully balance model utility and training efficiency; c) it supports fine-grained control over communication and computation costs by offering two variants, C-MDL/ and D-MDL/, which operate under loose and tight coordination, respectively, and can thus be tuned to the system setting at hand (e.g., limited versus ample network bandwidth). Through extensive empirical evaluation on benchmark datasets and DNN architectures, we demonstrate the efficacy of MDL/.
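To make the lightweight-homomorphic-encryption primitive concrete, the following is a minimal Python sketch of Paillier-style additively homomorphic aggregation, the standard mechanism by which parties in an MDL protocol can encrypt gradient updates so that an aggregator learns only their sum. This is not the paper's implementation: the key sizes, function names, and gradient values are illustrative assumptions only, and a real deployment would use 2048-bit or larger moduli from a vetted cryptographic library.

```python
# Sketch (NOT the paper's implementation) of additively homomorphic gradient
# aggregation: each party encrypts its update, the aggregator multiplies
# ciphertexts, and only the sum of all updates is ever decrypted.
import math
import random

def keygen(p=1009, q=1013):                 # toy primes; insecure sizes
    n = p * q
    lam = math.lcm(p - 1, q - 1)            # Carmichael function of n
    mu = pow(lam, -1, n)                    # valid because we pick g = n + 1
    return (n, n + 1), (lam, mu)            # (public key, secret key)

def encrypt(pk, m):
    n, g = pk
    r = random.SystemRandom().randrange(1, n)   # gcd(r, n) == 1 w.h.p.
    return (pow(g, m, n * n) * pow(r, n, n * n)) % (n * n)

def decrypt(pk, sk, c):
    n, _ = pk
    lam, mu = sk
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

def add_cipher(pk, c1, c2):
    # Homomorphism: E(m1) * E(m2) mod n^2 decrypts to m1 + m2 (mod n).
    n, _ = pk
    return (c1 * c2) % (n * n)

if __name__ == "__main__":
    pk, sk = keygen()
    grads = [17, 42, 8]                     # hypothetical quantized updates
    agg = encrypt(pk, 0)
    for upd in grads:
        agg = add_cipher(pk, agg, encrypt(pk, upd))
    assert decrypt(pk, sk, agg) == sum(grads)
    print("aggregated sum:", decrypt(pk, sk, agg))
```

In a full MDL/-style protocol, the decryption capability itself would additionally be protected by the third primitive, threshold secret sharing, so that no single party holds the secret key; this sketch omits that layer.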