New paper at PETS’23

"How Much Privacy Does Federated Learning with Secure Aggregation Guarantee?" While there has been a lot of work on algorithmic innovations in federated machine learning, there hasn't been much on quantifying its privacy guarantees by itself (without resorting to differential privacy, etc). In our recent PETS'23 paper with Ahmed Elkordy, Yahya Ezzeldin, Jiang Zhang, and Kostas Psounis, we have shown a very promising result (from privacy protection perspective) for federated learning. We have demonstrated that, by leveraging secure aggregation, the privacy leakage in federated learning decays as 1/cohort_size. In other words, similar to decentralized blockchain systems, federated learning systems also get more and more privacy preserving as more users join the cohort.

 