A General Theory for Client Sampling in Federated Learning

Abstract : While client sampling is a central operation of current state-of-the-art federated learning (FL) approaches, its impact on the convergence and speed of FL remains under-investigated. In this work, we provide a general theoretical framework to quantify the impact of a client sampling scheme and of client heterogeneity on federated optimization. First, we provide a unified theoretical ground for previously reported experimental results on the relationship between FL convergence and the variance of the aggregation weights. Second, we prove for the first time that the quality of FL convergence is also impacted by the resulting covariance between aggregation weights. Our theory is general, and is here applied to Multinomial Distribution (MD) and Uniform sampling, the two default unbiased client sampling schemes of FL, and is demonstrated through a series of experiments in non-iid and unbalanced scenarios. Our results suggest that MD sampling should be used as the default sampling scheme, due to its resilience to changes in data ratios during the learning process, while Uniform sampling is superior only in the special case where clients hold the same amount of data.
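The two schemes named in the abstract can be illustrated with a minimal Monte Carlo sketch (not the authors' code; client count `N = 10`, sampled clients `m = 3`, and the synthetic data sizes are all assumptions for illustration). Both schemes are unbiased, i.e. the expected aggregation weight of client i equals its data ratio p_i = n_i / n; they differ in the variance and covariance of the weights, which the paper argues also drive FL convergence.

```python
import numpy as np

rng = np.random.default_rng(0)

N, m = 10, 3                     # assumed: 10 clients, m = 3 sampled per round
n_k = rng.integers(10, 100, N)   # synthetic per-client dataset sizes
p = n_k / n_k.sum()              # data ratios p_i = n_i / n

def md_weights(rng):
    """MD sampling: m i.i.d. draws from Multinomial(p); weight = count / m."""
    return rng.multinomial(m, p) / m

def uniform_weights(rng):
    """Uniform sampling: m distinct clients; sampled weights rescaled by N / m."""
    w = np.zeros(N)
    sampled = rng.choice(N, size=m, replace=False)
    w[sampled] = p[sampled] * N / m
    return w

T = 50_000
md_mean = np.mean([md_weights(rng) for _ in range(T)], axis=0)
uni_mean = np.mean([uniform_weights(rng) for _ in range(T)], axis=0)

# Both empirical means should be close to p (unbiasedness of the schemes).
print(np.max(np.abs(md_mean - p)))
print(np.max(np.abs(uni_mean - p)))
```

Note the N/m rescaling in Uniform sampling: without it, clients with unequal data sizes would be aggregated with a biased expected weight, which is the unbalanced regime the abstract highlights.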
Document type :
Conference papers
Contributor : Yann Fraboni
Submitted on : Tuesday, July 12, 2022 - 9:42:25 AM
Last modification on : Thursday, November 17, 2022 - 11:45:17 AM
  • HAL Id : hal-03500307, version 2


Yann Fraboni, Richard Vidal, Laetitia Kameni, Marco Lorenzi. A General Theory for Client Sampling in Federated Learning. International Workshop on Trustworthy Federated Learning in Conjunction with IJCAI 2022 (FL-IJCAI'22), Jul 2022, Vienna, Austria. ⟨hal-03500307v2⟩