Doctoral thesis

Firooz Shahriari Mehr, Data Science and AI

Convex Optimization for Machine Learning over Graphs: From Collaborative Learning to Node Classification

Overview


Machine learning and optimization are deeply interconnected fields, with optimization forming the backbone of most machine learning methods. This thesis explores two directions in which optimization contributes to machine learning. The first concerns the design and analysis of distributed optimization algorithms for efficient training of machine learning models. The second focuses on the development of a convex optimization framework for node classification that jointly leverages node features and graph structure.

In the first part of this thesis, we introduce DAGP, a decentralized optimization algorithm for peer-to-peer communication networks, enabling private and scalable training at the network edge. To address stragglers, i.e., agents that compute or communicate slowly, we propose ASY-DAGP, an asynchronous extension that still converges to the optimal solution under mild conditions. In applications such as decentralized federated learning, where data distributions across agents are heterogeneous, we show that our constrained framework naturally supports personalization, allowing for personalized variants of DAGP. Finally, we introduce the Linear Quadratic Performance Estimation Problem (LQ-PEP), a new convergence-analysis methodology that systematically derives convergence rates without relying on manually crafted Lyapunov functions.
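
To make the decentralized setting concrete, the minimal sketch below shows a generic decentralized projected-gradient update over a peer-to-peer network. It is an illustration only, not the DAGP or ASY-DAGP algorithm from the thesis; the ring topology, mixing matrix, step size, and quadratic local losses are all assumptions made for the example.

# Illustrative sketch only: a generic decentralized projected-gradient step over a
# peer-to-peer network. This is NOT the DAGP/ASY-DAGP algorithm; the mixing matrix,
# step size, constraint set, and quadratic local losses are hypothetical choices.
import numpy as np

rng = np.random.default_rng(0)
n_agents, dim = 5, 3

# Hypothetical local losses f_i(x) = 0.5 * ||A_i x - b_i||^2, one per agent
A = rng.normal(size=(n_agents, 4, dim))
b = rng.normal(size=(n_agents, 4))

def local_grad(i, x):
    return A[i].T @ (A[i] @ x - b[i])

def project_ball(x, radius=1.0):
    # Example constraint set: a Euclidean ball, standing in for an agent's constraint
    nrm = np.linalg.norm(x)
    return x if nrm <= radius else radius * x / nrm

# Doubly stochastic mixing matrix for an assumed ring network
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

x = np.zeros((n_agents, dim))            # one local iterate per agent
step = 0.05
for _ in range(200):
    mixed = W @ x                        # exchange iterates with neighbours and average
    for i in range(n_agents):
        x[i] = project_ball(mixed[i] - step * local_grad(i, mixed[i]))

print("disagreement across agents:", np.linalg.norm(x - x.mean(axis=0)))

In this toy loop, each agent only communicates with its two ring neighbours and applies a local gradient and projection step, which captures the peer-to-peer flavour of the setting even though the actual algorithms in the thesis use a different, constrained formulation with stronger guarantees.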

In the second part of this thesis, we propose a novel optimization framework for transductive node classification that integrates graph clustering with a regularization term based on node features. To the best of our knowledge, this is the first work to theoretically demonstrate the synergistic effect of combining these two sources of information, i.e., the graph structure and the node features. We prove that perfect label recovery can be achieved under milder conditions than when using either source alone.
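
As a rough illustration of how the two sources of information can enter a single convex objective, the sketch below combines a graph-smoothness penalty with a node-feature penalty in a small transductive example. It is a simplified stand-in under assumed choices (a toy graph, a Gaussian feature kernel, weights alpha and beta), not the framework analyzed in the thesis.

# Illustrative sketch only: a generic convex objective for transductive node
# classification that combines a graph-structure term with a node-feature term.
# The toy graph, feature kernel, and weights alpha, beta are hypothetical choices.
import numpy as np

rng = np.random.default_rng(1)
n = 8

# Toy adjacency matrix: two clusters of four nodes joined by a single edge
A = np.zeros((n, n))
for i, j in [(0, 1), (1, 2), (2, 3), (0, 2), (4, 5), (5, 6), (6, 7), (4, 6), (3, 4)]:
    A[i, j] = A[j, i] = 1.0
# Toy node features: the two clusters sit around -1 and +1
X = np.vstack([rng.normal(-1, 0.3, size=(4, 2)), rng.normal(+1, 0.3, size=(4, 2))])

def laplacian(W):
    return np.diag(W.sum(axis=1)) - W

L_graph = laplacian(A)
# Feature-similarity graph from a Gaussian kernel on node features
D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
W_feat = np.exp(-D ** 2)
np.fill_diagonal(W_feat, 0.0)
L_feat = laplacian(W_feat)

# Transductive setting: labels observed on only two nodes
y_obs = np.zeros(n)
y_obs[0], y_obs[4] = -1.0, +1.0
M = np.diag([1.0, 0, 0, 0, 1.0, 0, 0, 0])    # mask of labelled nodes

alpha, beta = 1.0, 1.0
# Minimize ||M (f - y_obs)||^2 + alpha * f' L_graph f + beta * f' L_feat f,
# a quadratic problem whose minimizer solves a linear system
f = np.linalg.solve(M + alpha * L_graph + beta * L_feat, M @ y_obs)
print("predicted signs:", np.sign(f))

The point of the sketch is only that the graph term and the feature term regularize the same label variable, so information from both sources shapes the solution; the thesis studies when such a combination provably recovers the true labels under weaker conditions than either term alone.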