ARTICLE
TITLE

Federated Optimization of l0-norm Regularized Sparse Learning

Qianqian Tong    
Guannan Liang    
Jiahao Ding    
Tan Zhu    
Miao Pan and Jinbo Bi    

Abstract

Regularized sparse learning with the l0-norm is important in many areas, including statistical learning and signal processing. Iterative hard thresholding (IHT) methods are the state of the art for nonconvex-constrained sparse learning due to their ability to recover the true support and to scale to large datasets. Current theoretical analyses of IHT assume centralized IID data. In realistic large-scale scenarios, however, data are distributed, seldom IID, and private to edge computing devices. It is therefore necessary to study IHT in a federated environment, where local devices update the sparse model individually and communicate with a central server for aggregation infrequently, without sharing local data. In this paper, we propose the first group of federated IHT methods, Federated Hard Thresholding (Fed-HT) and Federated Iterative Hard Thresholding (FedIter-HT), with theoretical guarantees. We prove that both algorithms have a linear convergence rate and guarantee recovery of the optimal sparse estimator, comparable to classic IHT methods, but with decentralized, non-IID, and unbalanced data. Empirical results demonstrate that Fed-HT and FedIter-HT outperform their competitor, a distributed IHT, by reducing objective values with fewer communication rounds and lower bandwidth requirements.
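To make the setting concrete, here is a minimal sketch in Python/NumPy of one communication round in the style described above. It is not the authors' implementation: the loss (least squares), the function names (hard_threshold, local_update, fed_ht_round), and the parameters (lr, local_steps, k) are illustrative assumptions. The sketch follows the Fed-HT idea of applying the hard-thresholding operator, which keeps the k largest-magnitude entries, at the server after averaging local updates; a FedIter-HT-style variant would also threshold after every local step.

import numpy as np

def hard_threshold(w, k):
    # Keep the k largest-magnitude entries of w; zero out the rest.
    out = np.zeros_like(w)
    keep = np.argpartition(np.abs(w), -k)[-k:]
    out[keep] = w[keep]
    return out

def local_update(w, X, y, lr=0.01, steps=5):
    # Plain local gradient steps on a least-squares loss (illustrative choice).
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def fed_ht_round(w_global, clients, k):
    # One communication round in the style of Fed-HT: each client updates
    # locally on its own data, then the server averages the local models
    # and applies hard thresholding to restore k-sparsity.
    local_models = [local_update(w_global.copy(), X, y) for X, y in clients]
    return hard_threshold(np.mean(local_models, axis=0), k)

# Hypothetical usage: two clients with synthetic data and a 3-sparse target.
rng = np.random.default_rng(0)
w_true = np.zeros(20)
w_true[:3] = [2.0, -1.5, 1.0]
clients = []
for _ in range(2):
    X = rng.normal(size=(50, 20))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=50)))
w = np.zeros(20)
for _ in range(30):
    w = fed_ht_round(w, clients, k=3)

Because thresholding happens only at aggregation, each round transmits a k-sparse model, which is consistent with the reduced bandwidth requirements the abstract reports.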

Similar Articles

Christian Moya and Guang Lin    
The Deep Operator Network (DeepONet) framework is a different class of neural network architecture that one trains to learn nonlinear operators, i.e., mappings between infinite-dimensional spaces. Traditionally, DeepONets are trained using a centralized ... see more
Journal: Algorithms