Speeding up distributed learning and optimization: towards low-complexity, communication-efficient algorithms with superlinear convergence

Friday, 13 June 2025 | 11:30
Dipartimento di Elettronica, Informazione e Bioingegneria - Politecnico di Milano
"Emilio Gatti" Conference Room (Building 20)
Speaker: Prof. Subhrakanti Dey (Uppsala University)
Contact: Prof. Simone Garatti | simone.garatti@polimi.it
Abstract
The next generation of networked cyber-physical systems will support a range of application domains, e.g., connected autonomous vehicular networks, collaborative robotics in smart factories, and many other mission-critical applications. With the advent of massive machine-to-machine communication and IoT networks, huge volumes of data can be collected and processed with low latency through edge computing facilities.
Distributed machine learning enables cross-device collaborative learning without exchanging raw data, preserving privacy and reducing communication cost. Learning over wireless networks, however, poses significant challenges: limited communication bandwidth and channel variability, limited computational resources at the IoT devices, the heterogeneous nature of distributed data, and randomly time-varying network topologies.
In this talk, we will present (i) low-complexity, communication-efficient Federated Learning (FL) algorithms based on approximate Newton-type optimization techniques employed at the local agents, which achieve a superlinear convergence rate as opposed to the linear rates achieved by state-of-the-art gradient-descent-based algorithms, and (ii) fully distributed network-Newton-type algorithms based on a distributed version of the well-known GIANT algorithm.
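To make the approximate-Newton idea behind GIANT concrete, here is a minimal sketch of one communication round for distributed ridge-regularized least squares. The synthetic data, the regularizer lam, and the exact local solves are illustrative assumptions, not the algorithms presented in the talk.

```python
import numpy as np

rng = np.random.default_rng(0)
d, lam, num_workers = 5, 1e-2, 4
# synthetic local datasets (X_i, y_i), one per worker (illustrative only)
local_data = [(rng.normal(size=(50, d)), rng.normal(size=50))
              for _ in range(num_workers)]
w = np.zeros(d)

def local_gradient(X, y, w):
    # gradient of the local ridge-regularized least-squares loss
    return X.T @ (X @ w - y) / len(y) + lam * w

def local_newton_direction(X, y, g):
    # each worker solves its *local* Newton system H_i p = g, where g is
    # the globally averaged gradient -- the key GIANT-style step
    H = X.T @ X / len(y) + lam * np.eye(d)
    return np.linalg.solve(H, g)

for _ in range(10):
    # communication round 1: server averages the local gradients
    g = np.mean([local_gradient(X, y, w) for X, y in local_data], axis=0)
    # communication round 2: server averages the local Newton directions
    p = np.mean([local_newton_direction(X, y, g) for X, y in local_data],
                axis=0)
    w -= p  # unit step size; the full method uses a line search

print("gradient norm after 10 rounds:", np.linalg.norm(g))
```

Note the communication pattern: only d-dimensional gradients and Newton directions are exchanged, never raw data or Hessians, which is what makes such second-order methods communication-efficient.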
While consensus-based distributed optimization algorithms are naturally limited to linear convergence rates, we will show that one can design finite-time-consensus-based distributed network-Newton-type algorithms that achieve superlinear convergence with respect to the number of Newton steps, albeit at the cost of an increased number of consensus rounds. We will conclude with some new research directions.
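As a sketch of the finite-time-consensus ingredient, the snippet below achieves exact average consensus on a ring graph in finitely many rounds by exploiting the (assumed known) spectrum of a symmetric doubly stochastic weight matrix W. The graph, weights, and scalar values are illustrative assumptions, not the specific scheme of the talk.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
# ring graph with uniform symmetric doubly stochastic weights (illustrative)
W = np.zeros((n, n))
for i in range(n):
    W[i, (i - 1) % n] = W[i, (i + 1) % n] = W[i, i] = 1 / 3

x = rng.normal(size=n)  # one scalar per node, e.g. a gradient coordinate
# distinct eigenvalues of W, assumed known to all nodes
eigs = np.unique(np.round(np.linalg.eigvalsh(W), 10))

z = x.copy()
for lam in eigs[eigs < 1 - 1e-9]:  # skip the consensus eigenvalue 1
    # one neighbor exchange plus local scaling per consensus round
    z = (W @ z - lam * z) / (1 - lam)

# after len(eigs) - 1 rounds every node holds the exact network average
print(np.allclose(z, x.mean()))  # True
```

Each round costs one exchange with neighbors, so exact averaging trades extra consensus rounds for the superlinear progress of each Newton step, which is precisely the trade-off highlighted in the abstract.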
Biography
Subhrakanti Dey received the Ph.D. degree from the Department of Systems Engineering, Research School of Information Sciences and Engineering, Australian National University, Canberra, in 1996. He is currently a Professor and Head of the Signals and Systems division in the Department of Electrical Engineering at Uppsala University, Sweden. He has also held professorial positions at the National University of Ireland, Maynooth, and the University of Melbourne, Australia.
His current research interests include networked control systems, distributed machine learning and optimization, and detection and estimation theory for wireless sensor networks. He is a Senior Editor for the IEEE Transactions on Control of Network Systems and IEEE Control Systems Letters, and an Associate Editor for Automatica. He is a Fellow of the IEEE.