Rethinking Secure Aggregation in FL!
Secure model aggregation is a key component of federated learning (FL): it protects the privacy of each user's individual model update while still allowing the server to compute their aggregate. State-of-the-art secure aggregation protocols rely on secret sharing the random seeds that users employ for mask generation, so that the masks of dropped users can be reconstructed and cancelled. The complexity of this approach, however, grows substantially with the number of users.

We propose a new approach, named LightSecAgg, that overcomes this bottleneck by shifting the focus from "random-seed reconstruction of the dropped users" to "one-shot aggregate-mask reconstruction of the active users". We evaluate LightSecAgg via extensive experiments, training diverse models on various datasets in a realistic FL system, and show that it significantly reduces the total training time, achieving a performance gain of up to 12.7x over baselines.
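To make the shift in focus concrete, here is a minimal Python sketch of the "one-shot aggregate-mask reconstruction" idea. It is an illustration under simplifying assumptions, not the LightSecAgg protocol itself: the paper encodes vector masks with an MDS code for communication efficiency, whereas this toy uses plain Shamir sharing of scalar masks, and the field modulus `P`, the parameters `N` and `T`, and the helpers `share_mask` and `interpolate_at_zero` are all invented for the example.

```python
# Toy sketch of one-shot aggregate-mask reconstruction.
# NOT the actual LightSecAgg protocol: the paper MDS-encodes vector masks;
# plain Shamir sharing of scalar masks stands in here for illustration.
import random

P = 2**31 - 1   # toy prime field modulus (assumption, not from the paper)
N, T = 5, 2     # N users; any T + 1 survivors suffice for reconstruction

def share_mask(mask):
    """Shamir-share a mask: random degree-T polynomial f with f(0) = mask;
    user j (1-indexed) receives the evaluation f(j)."""
    coeffs = [mask] + [random.randrange(P) for _ in range(T)]
    return {j: sum(c * pow(j, k, P) for k, c in enumerate(coeffs)) % P
            for j in range(1, N + 1)}

def interpolate_at_zero(points):
    """Lagrange-interpolate the shared polynomial at x = 0 over GF(P)."""
    total = 0
    for xi, yi in points:
        num = den = 1
        for xj, _ in points:
            if xj != xi:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        total = (total + yi * num * pow(den, P - 2, P)) % P  # den^-1 (Fermat)
    return total

# Offline: every user picks a private mask and distributes shares of it.
models = [random.randrange(1000) for _ in range(N)]   # toy scalar "models"
masks = [random.randrange(P) for _ in range(N)]
shares = [share_mask(z) for z in masks]               # shares[i][j]: i -> user j

# Round: users 0..3 upload masked models; user 4 dropped out beforehand.
active = [0, 1, 2, 3]
masked_sum = sum((models[i] + masks[i]) % P for i in active) % P

# One-shot reconstruction: any T + 1 = 3 surviving users each send the server
# a SINGLE value, the sum of the shares they hold from the active users
# (user 0 went offline after uploading, yet its mask is still cancelled).
survivors = [1, 2, 3]
points = [(j + 1, sum(shares[i][j + 1] for i in active) % P) for j in survivors]
aggregate_mask = interpolate_at_zero(points)

# The server removes the aggregate mask once; no per-user seed reconstruction.
aggregate_model = (masked_sum - aggregate_mask) % P
assert aggregate_model == sum(models[i] for i in active)
print("aggregate of active models:", aggregate_model)
```

The property the toy preserves is that shares of a sum equal the sum of shares: each surviving user uploads one aggregate share, and the server cancels all active users' masks in a single interpolation, instead of reconstructing a seed for every dropped user.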
For more information, refer to the paper: https://arxiv.org/abs/2109.14236