My research in short: theoretical optimization and learning, especially for multi-agent systems, with a focus on addressing practical challenges.
Decentralized optimization and learning
The deployment of an ever-increasing number of devices with computational and connectivity resources has given rise to multi-agent systems (MASs) in a wide range of applications. Therefore, there is a need for decentralized algorithms to enable decision making and cooperation in such systems. However, the challenges that arise in this context require tailored theoretical developments and design principles.
Decentralized algorithms are characterized by an alternation of local computations, based on the information and data stored privately by each agent, and communications, either peer-to-peer or with a central coordinator.
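To fix ideas, the following is a minimal sketch of this compute/communicate pattern. It is not any specific algorithm from the works below, and all names (decentralized_loop, local_update, the mixing rule) are purely illustrative:

```python
import numpy as np

def decentralized_loop(states, neighbors, local_update, num_iters=100):
    """states: {agent id: local state}; neighbors: {agent id: list of neighbor ids};
    local_update: callable(agent id, own state, received states) -> new state."""
    for _ in range(num_iters):
        # communication: each agent collects its neighbors' current states
        received = {i: [states[j] for j in neighbors[i]] for i in states}
        # local computation: each agent updates using its private data and what it received
        states = {i: local_update(i, states[i], received[i]) for i in states}
    return states

# example: three agents on a path graph reach agreement by mixing with their neighbors
states = {0: np.array(1.0), 1: np.array(5.0), 2: np.array(9.0)}
neighbors = {0: [1], 1: [0, 2], 2: [1]}
mix = lambda i, x, recv: 0.5 * x + 0.5 * np.mean(recv, axis=0)
print(decentralized_loop(states, neighbors, mix))
```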
Asynchronous & robust distributed optimization
The agents in a MAS are often equipped with heterogeneous resources, and thus may complete their local computations asynchronously. Moreover, communications between the agents may be lossy or noisy. These challenges motivated our work on a distributed version of ADMM (the Alternating Direction Method of Multipliers) that is robust to asynchrony and packet losses; a toy simulation of this set-up is sketched after the references below. The work culminated in:
- N. Bastianello, R. Carli, L. Schenato, M. Todescato. “Asynchronous Distributed Optimization over Lossy Networks via Relaxed ADMM: Stability and Linear Convergence.” IEEE Trans. Automatic Control, vol. 66, no. 6, pp. 2620-2635, Jun. 2021 link, arXiv
Other related references are:
- N. Bastianello, L. Schenato, R. Carli. “A novel bound on the convergence rate of ADMM for distributed optimization.” Automatica, vol. 142, pp. 110403, Aug. 2022 link
- A. Olama, N. Bastianello, P. Da Costa Mendes, E. Camponogara. “Relaxed Hybrid Consensus ADMM for Distributed Convex Optimization with Coupling Constraints.” IET Control Theory & Applications, vol. 13, no. 17, pp. 2828-2837, Nov. 2019 link
- N. Bastianello, R. Carli, L. Schenato, M. Todescato. “A Partition-Based Implementation of the Relaxed ADMM for Distributed Convex Optimization over Lossy Networks.” 2018 IEEE Conference on Decision and Control (CDC’18), Dec. 2018, pp. 3379-3384 link, arXiv
- N. Bastianello, M. Todescato, R. Carli, L. Schenato. “Distributed Optimization over Lossy Networks via Relaxed Peaceman-Rachford Splitting: a Robust ADMM Approach.” 2018 European Control Conference (ECC’18), Jun. 2018, pp. 477-482 link, arXiv
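To make the set-up of these works concrete, the toy snippet below simulates only the communication model they consider (not the relaxed ADMM itself): at each iteration a random subset of agents is active (asynchrony), and each transmitted message is lost with some probability, in which case the receiver keeps re-using the last value it received. All quantities are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
num_agents, p_active, p_loss = 5, 0.6, 0.3   # illustrative values
neighbors = {i: [j for j in range(num_agents) if j != i] for i in range(num_agents)}
states = {i: rng.normal() for i in range(num_agents)}
# each agent stores the last message it received from each of its neighbors
last_received = {i: {j: states[j] for j in neighbors[i]} for i in range(num_agents)}

for k in range(100):
    active = [i for i in range(num_agents) if rng.random() < p_active]   # asynchronous activation
    for i in active:
        # local computation (here just a mix with the stored neighbor values)
        states[i] = 0.5 * states[i] + 0.5 * np.mean(list(last_received[i].values()))
        # lossy communication: each message reaches its destination with probability 1 - p_loss
        for j in neighbors[i]:
            if rng.random() > p_loss:
                last_received[j][i] = states[i]
```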
(Dynamic) average consensus and beyond
The observation that (dynamic) average consensus can be formulated as a distributed optimization problem sparked this offshoot project. The following references discuss the design of (dynamic) average consensus protocols based on the blueprint of the distributed ADMM (a textbook dynamic average consensus iteration is sketched after the list, for reference):
- D. Deplano*, N. Bastianello*, M. Franceschelli, K. H. Johansson. “A unified approach to solve the dynamic consensus on the average, maximum, and median values with linear convergence.” 2023 IEEE Conference on Decision and Control (CDC’23), Dec. 2023, pp. 6442-6448 [* equal contribution] link
- N. Bastianello, R. Carli. “ADMM for Dynamic Average Consensus over Imperfect Networks.” IFAC Conference on Networked Systems (NecSys’22), Jul. 2022, IFAC-PapersOnLine vol. 55, no. 13, pp. 228-233 link
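For reference, the snippet below implements a textbook first-order dynamic average consensus iteration, not the ADMM-based protocols of the references above: with a doubly stochastic mixing matrix and the initialization below, each agent tracks the time-varying average of the local signals. All values are illustrative.

```python
import numpy as np

# doubly stochastic mixing matrix for a 3-agent path graph (Metropolis weights)
W = np.array([[2/3, 1/3, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])

def r(k):
    # local time-varying signals r_i(k); their average is the quantity to track
    return np.array([np.sin(0.01 * k), 2.0, 0.5 * np.cos(0.01 * k)])

x = r(0)                              # initialization x_i(0) = r_i(0)
for k in range(1000):
    x = W @ x + r(k + 1) - r(k)       # mix with neighbors, then add the signal increment

print(x, r(1000).mean())              # each x_i approximately tracks the average signal
```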
These works further spurred the development of a more general set of theoretical results, discussed in:
- N. Bastianello*, D. Deplano*, M. Franceschelli, K. H. Johansson. “Robust Online Learning over Networks.” IEEE Trans. Automatic Control [to appear] [* equal contribution] arXiv
Robustifying gradient tracking
In the context of distributed optimization, we can distinguish two main approaches: gradient-based methods, including DGD and gradient tracking, and dual methods, including ADMM. In particular, gradient tracking algorithms combine local gradient descent steps with a dynamic average consensus on the local gradients to achieve, asymptotically, consensus on the optimal solution. However, the consensus protocols usually employed in gradient tracking lack robustness to e.g. asynchrony and packet losses. The insight therefore was to use the ADMM-based dynamic average consensus instead (a sketch of textbook gradient tracking follows the references below), leading to:
- G. Carnevale, N. Bastianello, R. Carli, G. Notarstefano. “Distributed Consensus Optimization via ADMM-Tracking Gradient.” 2023 IEEE Conference on Decision and Control (CDC’23), Dec. 2023, pp. 290-295 link
- G. Carnevale, N. Bastianello, G. Notarstefano, R. Carli. “ADMM-Tracking Gradient for Distributed Optimization over Asynchronous and Unreliable Networks.” arXiv
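For comparison, here is a minimal implementation of textbook gradient tracking, with a standard linear consensus step rather than the ADMM-based robustification of the papers above, on a toy problem with scalar quadratic local costs; all values are illustrative.

```python
import numpy as np

# doubly stochastic mixing matrix (Metropolis weights, 3-agent path graph)
W = np.array([[2/3, 1/3, 0.0],
              [1/3, 1/3, 1/3],
              [0.0, 1/3, 2/3]])
a = np.array([1.0, 4.0, 10.0])        # local data: f_i(x) = 0.5 * (x - a_i)^2
grad = lambda x: x - a                # stacked local gradients
alpha = 0.1                           # step size

x = np.zeros(3)
y = grad(x)                           # tracker initialized at the local gradients
for k in range(300):
    x_new = W @ x - alpha * y                 # consensus step plus a step along the tracked gradient
    y = W @ y + grad(x_new) - grad(x)         # dynamic average consensus on the local gradients
    x = x_new

print(x, a.mean())    # all agents converge to the minimizer of the aggregate cost
```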
Private federated learning
The previous projects addressed a distributed set-up, in which the agents rely on peer-to-peer communications. Another interesting set-up is the federated one, in which the agents communicate with a central coordinator that aggregates locally trained models. While the coordinator resolves the challenge of reaching consensus, other challenges arise, among them limited communications and privacy. In the following reference we address both, and show the synergy between the design principles they inspire (a toy sketch of the federated set-up follows the reference):
- N. Bastianello, C. Liu, K. H. Johansson. “Enhancing Privacy in Federated Learning through Local Training.” arXiv
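As a point of reference, the following is a generic FedAvg-style sketch of the federated set-up; it is not the algorithm of the paper above, and it includes neither its communication-reduction nor its privacy mechanisms. Clients train locally starting from the global model, and the coordinator averages the results.

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.normal(size=10)               # local data of N = 10 clients: f_i(x) = 0.5 * (x - a_i)^2
lr, local_steps, rounds = 0.1, 20, 30

z = 0.0                               # global model kept by the coordinator
for _ in range(rounds):
    local_models = []
    for a_i in a:                     # each client trains locally from the current global model
        x = z
        for _ in range(local_steps):
            x -= lr * (x - a_i)       # gradient step on the local cost
        local_models.append(x)
    z = np.mean(local_models)         # the coordinator aggregates (averages) the local models

print(z, a.mean())                    # the global model approaches the minimizer of the average cost
```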
Online optimization
Technological advances in many disciplines have recently motivated a growing interest in online (or time-varying) optimization. In traditional static optimization (also called batch or time-invariant), the data of a problem are collected once and then a solver is applied until a solution (or a good approximation thereof) is reached. Online optimization instead features streaming data sources, in the sense that new data are revealed in a sequential manner over time, rather than all at once.
The common approach to online optimization is to convert static algorithms to the online set-up, and analyze their performance. However, this approach may actually result in poor performance. My research has thus focused on designing structured algorithms that leverage information on the online problem to improve performance.
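A minimal example of the online set-up (with illustrative names and numbers): a new cost is revealed at each time step, the solver can only apply a few iterations before the next cost arrives, and the goal is to track the moving solution.

```python
import numpy as np

def theta(k):                          # drifting minimizer of f_k(x) = 0.5 * (x - theta(k))^2
    return np.sin(0.05 * k)

x, alpha, steps_per_sample = 0.0, 0.5, 1
errors = []
for k in range(200):
    for _ in range(steps_per_sample):
        x -= alpha * (x - theta(k))    # online gradient step(s) on the cost revealed at time k
    errors.append(abs(x - theta(k)))   # the tracking error stays bounded but does not vanish
```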
Prediction-correction
A first approach to structured online optimization is prediction-correction, which augments a classic online optimization algorithm (the correction step) with a warm-starting prediction step that anticipates the next problem from past information; a simplified sketch follows the references below. This research culminated in:
- N. Bastianello, R. Carli, A. Simonetto. “Extrapolation-based Prediction-Correction Methods for Time-varying Convex Optimization.” Signal Processing, vol. 210, pp. 109089, Sep. 2023. link, arXiv
Other related references are:
- N. Bastianello, A. Simonetto, R. Carli. “Prediction-Correction Splittings for Time-Varying Optimization with Intermittent Observations.” IEEE Control Systems Letters, vol. 4, no. 2, pp. 373-378, Apr. 2020 link
- N. Bastianello, A. Simonetto, R. Carli. “Prediction-Correction Splittings for Nonsmooth Time-Varying Optimization.” 2019 European Control Conference (ECC’19), Jun. 2019, pp. 1963-1968 link, arXiv
- N. Bastianello, A. Simonetto, R. Carli. “Prediction-Correction for Nonsmooth Time-Varying Optimization via Forward-Backward Envelopes.” 2019 International Conference on Acoustics, Speech, and Signal Processing (ICASSP’19), May 2019, pp. 5581-5585 link, arXiv
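The snippet below sketches the prediction-correction template on a toy scalar problem, using a naive extrapolation of the past iterates as prediction; the papers above design and analyze more refined predictions (e.g. based on extrapolating the cost itself). All values are illustrative.

```python
import numpy as np

def theta(k):                          # drifting minimizer of f_k(x) = 0.5 * (x - theta(k))^2
    return np.sin(0.05 * k)

alpha = 0.5
x_prev, x = theta(0), theta(0)
for k in range(1, 200):
    x_pred = 2 * x - x_prev                       # prediction: extrapolate before f_k is revealed
    x_prev = x
    x = x_pred - alpha * (x_pred - theta(k))      # correction: gradient step on the new cost f_k
```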
Control theory for online optimization
Another approach to designing structured online optimization algorithms is to leverage control theory. The insight is that a quadratic online problem can be interpreted as a linear (robust) control problem, which allowed us to use the internal model principle to design a novel algorithm; a compact sketch of this interpretation follows the references below. Interestingly, this algorithm outperforms unstructured methods (such as online gradient descent) even on non-quadratic problems. This approach is discussed in:
- N. Bastianello, R. Carli, S. Zampieri. “Internal Model-Based Online Optimization.” IEEE Trans. Automatic Control, vol. 69, no. 1, pp. 689–696, Jan. 2024 link, arXiv
And further extended to constrained problems in:
- U. Casti, N. Bastianello, R. Carli, S. Zampieri. “A Control Theoretical Approach to Online Constrained Optimization.” arXiv
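To give a flavor of the underlying idea (with notation that is simplified with respect to the papers), consider an online quadratic cost and apply online gradient descent:

$$
f_k(x) = \tfrac{1}{2} x^\top A x - b_k^\top x, \qquad
x_{k+1} = x_k - \alpha \nabla f_k(x_k) = (I - \alpha A)\, x_k + \alpha\, b_k .
$$

The algorithm is a linear system driven by the time-varying data $b_k$, and in general it tracks the moving minimizer $x_k^* = A^{-1} b_k$ only up to a nonvanishing error. Interpreting the algorithm as a controller that must track this exogenous signal, the internal model principle suggests embedding a model of the dynamics generating $b_k$ (e.g. constant, ramp, or sinusoidal signals) into the algorithm, which is what drives the asymptotic tracking error to zero for that class of signals.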
Learning to optimize
The previous project applied control theory to online optimization; this project instead develops learning techniques to design enhanced online algorithms. It fits in the rising research area of learning to optimize (L2O):
- N. Bastianello, A. Simonetto, E. Dall’Anese. “OpReg-Boost: Learning to Accelerate Online Algorithms with Operator Regression.” Proceedings of The 4th Annual Learning for Dynamics and Control Conference (L4DC’22), PMLR 168, pp. 138-152 link, arXiv, code
Distributed & online optimization
Combining my research on distributed optimization and online optimization gave rise to a few works:
- N. Bastianello, A. I. Rikos, K. H. Johansson. “Online Distributed Learning with Quantized Finite-Time Coordination.” 2023 IEEE Conference on Decision and Control (CDC’23), Dec. 2023, pp. 5026-5032 link, arXiv
- N. Bastianello, E. Dall’Anese. “Distributed and Inexact Proximal Gradient Method for Online Convex Optimization.” 2021 European Control Conference (ECC’21), Jun. 2021, pp. 2432-2437 link, arXiv
- N. Bastianello, A. Simonetto, R. Carli. “Distributed Prediction-Correction ADMM for Time-Varying Convex Optimization.” 54th Asilomar Conference on Signals, Systems and Computers, Nov. 2020, pp. 47-52 link, arXiv
Stochastic operator theory
Abstracting from both my research on distributed optimization and on online optimization, I developed a more theoretical project on stochastic operator theory. The insight is that a large number of optimization algorithms in challenging scenarios (e.g. with asynchrony or online costs) can be modeled as stochastic operators; a simplified illustration of this modeling idea follows the reference below. The project then developed a unified and thorough convergence analysis for this class of algorithms:
- N. Bastianello, L. Madden, R. Carli, E. Dall’Anese. “A Stochastic Operator Framework for Optimization and Learning with Sub-Weibull Errors.” IEEE Trans. Automatic Control [to appear] arXiv
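In its simplest form (simplified with respect to the set-up of the paper above), the modeling idea is to write an algorithm operating in one of these challenging scenarios as an inexact fixed-point iteration of a contractive operator,

$$
x_{k+1} = \mathcal{T}(x_k) + e_k, \qquad \|\mathcal{T}(x) - \mathcal{T}(y)\| \le q\,\|x - y\|, \quad q < 1,
$$

where the random errors $e_k$ capture, e.g., asynchronous updates, packet losses, or the drift of online costs. Denoting by $\bar{x}$ the fixed point of $\mathcal{T}$, contractiveness and the triangle inequality yield

$$
\|x_k - \bar{x}\| \le q^k \|x_0 - \bar{x}\| + \sum_{j=0}^{k-1} q^{\,k-1-j} \|e_j\|,
$$

so the iterates converge to a neighborhood of $\bar{x}$ whose size is dictated by the errors; the paper characterizes this behavior when the errors are sub-Weibull random variables.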
The reference:
- N. Bastianello*, D. Deplano*, M. Franceschelli, K. H. Johansson. “Robust Online Learning over Networks.” IEEE Trans. Automatic Control [to appear] [* equal contribution] arXiv
can be seen as a parallel development that exploits many of the same tools, but with a more focused scope.
Moreover, these results have already been brought to bear in the context of feedback optimization, where a system is controlled by the output of an (inherently online) optimization algorithm; a toy example of this interconnection is sketched after the reference:
- A. M. Ospina, N. Bastianello, E. Dall’Anese. “Feedback-Based Optimization with Sub-Weibull Gradient Errors and Intermittent Updates.” IEEE Control Systems Letters, vol. 6, pp. 2521-2526, 2022 link, arXiv
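A toy instance of this feedback interconnection (all quantities are illustrative and unrelated to the paper above): the plant is a static linear map with an unknown offset, and the gradient step uses the measured output in place of a model-based prediction of it.

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.normal(size=(3, 2))            # plant: y = G u + w, with w unknown to the algorithm
w = np.array([0.5, -1.0, 0.2])
y_ref = np.array([1.0, 0.0, 2.0])      # objective: 0.5*||y - y_ref||^2 + (lam/2)*||u||^2
lam, alpha = 0.1, 0.05

u = np.zeros(2)
for k in range(500):
    y = G @ u + w + 0.01 * rng.normal(size=3)     # measure the (noisy) plant output
    # feedback-based gradient step: the measurement y stands in for a model of the plant
    u -= alpha * (G.T @ (y - y_ref) + lam * u)
```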
Specific applications
This section collects recent works on specific applications.
Fault detection
This work developed a fault detection and categorization algorithm for reaction wheels in satellites. The research was carried out in the context of the Horizon Europe ULTIMATE project:
- A. Penacho Riveiros, Y. Xing, N. Bastianello, K. H. Johansson. “Real-Time Anomaly Detection and Categorization for Satellite Reaction Wheels.” 2024 European Control Conference (ECC’24), Jun. 2024
6G
We are currently laying the foundations for the next generation of communication networks. This ongoing project gave rise to the following:
- Ö. T. Demir, L. Méndez-Monsanto, N. Bastianello, E. Fitzgerald, G. Callebaut. “Energy Reduction in Cell-Free Massive MIMO through Fine-Grained Resource Management.” 2024 EuCNC & 6G Summit, Jun. 2024