Research

My research encompasses applied mathematics, networking, game theory, optimal control, and the economics of computing and communication networks. In recent work, I have focused on decentralized and distributed systems, addressing challenges in federated learning, blockchain technologies, automated market makers, datacenter scheduling, and resource allocation in wireless networks. Methodologically, my research integrates game theory, evolutionary games, optimization, stochastic processes, queueing theory, and distributed algorithms, while remaining responsive to the demands of emerging applications and infrastructures.

Distributed and federated learning

Federated learning and optimization are closely intertwined because training a global model across distributed clients requires efficient, adaptive, and robust optimization strategies. In federated learning, each client updates a local model based on its own data, and these updates are aggregated to improve a shared global model. Optimization challenges in this context include heterogeneous data distributions, limited communication bandwidth, device variability, and the presence of malicious clients. My work develops advanced optimization algorithms to address these challenges. This includes techniques for client selection and sampling, which prioritize contributions from clients that improve convergence and reduce variance; gradient aggregation and compression, to minimize communication overhead; and robust optimization methods or game-theoretic approaches to mitigate the impact of unreliable or adversarial clients.
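
As a concrete illustration, here is a minimal sketch of one federated averaging (FedAvg) round: each sampled client runs a few local gradient steps on a linear least-squares model (a stand-in for any local learner), and the server aggregates the results weighted by local dataset size. The helper names and toy data are illustrative assumptions, not code from a specific paper.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """One client's local training: a few gradient-descent steps on a
    linear least-squares loss (stand-in for any local learner)."""
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

def federated_averaging(global_w, clients, sample_frac=0.5, rng=None):
    """One FedAvg round: sample a subset of clients, train locally,
    then aggregate the local models weighted by dataset size."""
    if rng is None:
        rng = np.random.default_rng(0)
    k = max(1, int(sample_frac * len(clients)))
    chosen = rng.choice(len(clients), size=k, replace=False)
    total = sum(len(clients[i][1]) for i in chosen)
    new_w = np.zeros_like(global_w)
    for i in chosen:
        X, y = clients[i]
        new_w += (len(y) / total) * local_update(global_w, X, y)
    return new_w

# Toy run: 4 clients with heterogeneous data drawn around different optima.
rng = np.random.default_rng(42)
clients = []
for shift in (0.5, 1.0, 1.5, 2.0):
    X = rng.normal(size=(50, 3))
    y = X @ (shift * np.ones(3)) + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(3)
for _ in range(20):
    w = federated_averaging(w, clients, rng=rng)
print("global model after 20 rounds:", np.round(w, 2))
```

Sampling only a fraction of clients per round corresponds to the client-selection techniques above, and the size-weighted aggregation is one simple way to cope with heterogeneous data volumes.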

Cryptocurrency and Blockchain

Cryptocurrency and blockchain technologies have enabled automated market makers (AMMs), which allow decentralized trading without traditional order books. AMMs use smart contracts and liquidity pools, where token prices are determined by mathematical formulas based on pool balances. Research in this area focuses on optimizing liquidity allocation, minimizing impermanent loss, improving trading efficiency, and ensuring system robustness. My main focus is to design resilient AMM mechanisms and to enhance the performance and security of decentralized finance (DeFi) systems.
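
To make the pricing rule concrete, the sketch below assumes a Uniswap-v2-style constant-product pool, where reserves satisfy x * y = k and a trading fee (0.3% here, an illustrative value) is charged on the input; it shows how a swap against the pool determines the output amount and moves the price.

```python
def constant_product_swap(x_reserve, y_reserve, dx, fee=0.003):
    """Swap dx units of token X into a constant-product pool (x * y = k)
    and return (dy, new_reserves): the amount of token Y paid out and the
    updated pool state. The fee is charged on the input and stays in the
    pool, as in Uniswap v2."""
    k = x_reserve * y_reserve
    dx_effective = dx * (1 - fee)            # only the net input moves the price
    new_y = k / (x_reserve + dx_effective)   # invariant holds on fee-adjusted input
    dy = y_reserve - new_y                   # output owed to the trader
    return dy, (x_reserve + dx, new_y)

# The marginal (spot) price of X in units of Y is y/x; large trades move it.
x, y = 1_000.0, 1_000.0
dy, (x, y) = constant_product_swap(x, y, dx=100.0)
print(f"trader receives {dy:.2f} Y; pool now ({x:.1f} X, {y:.1f} Y); "
      f"spot price {y/x:.4f} Y per X")
```

The gap between the pre-trade spot price and the realized execution price is the slippage that liquidity-allocation and mechanism-design research aims to control.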

Parallel Computing and AI

In the context of distributed model training, generative AI (GenAI) workloads are characterized by a high degree of parallelism, achieved by distributing models (or their components, such as tensors) and data across multiple computing nodes. Parallel computing and artificial intelligence (AI) are therefore tightly interconnected, as many AI algorithms, particularly in machine learning and deep learning, require processing large datasets and performing complex computations efficiently. Parallel computing distributes these tasks across multiple processors or cores, enabling faster training, real-time inference, and scalable data processing. A key part of our research in this direction emphasizes the need to enhance network and fabric awareness to better accommodate AI workloads. This in turn enables large-scale neural network training, distributed optimization, and real-time decision-making, while reducing latency and alleviating resource bottlenecks.
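
As one example of why fabric awareness matters, the sketch below simulates a ring all-reduce, the bandwidth-efficient collective behind most data-parallel gradient synchronization: each of N workers sends only 1/N of its gradient vector per step, yet all workers end up with the global sum. The pure-NumPy simulation is an illustrative assumption, not production fabric code.

```python
import numpy as np

def ring_allreduce(grads):
    """Simulated ring all-reduce over the gradients held by N workers.
    After 2*(N-1) steps, every worker holds the sum of all gradients,
    having exchanged only one 1/N-sized chunk per step with its ring
    neighbor -- the traffic pattern that makes topology matter."""
    n = len(grads)
    # Each worker splits its gradient vector into n chunks.
    chunks = [list(np.array_split(g.astype(float), n)) for g in grads]

    # Reduce-scatter: after n-1 steps, worker i holds the full sum
    # of chunk (i+1) % n.
    for t in range(n - 1):
        for i in range(n):
            c = (i - t - 1) % n
            chunks[i][c] = chunks[i][c] + chunks[(i - 1) % n][c]

    # All-gather: circulate the completed chunks around the ring.
    for t in range(n - 1):
        for i in range(n):
            c = (i - t) % n
            chunks[i][c] = chunks[(i - 1) % n][c]

    return [np.concatenate(ch) for ch in chunks]

# Toy check: 4 workers, each holding a local gradient of dimension 8.
rng = np.random.default_rng(0)
grads = [rng.normal(size=8) for _ in range(4)]
reduced = ring_allreduce(grads)
assert np.allclose(reduced[0], sum(grads))  # every worker has the global sum
print(np.round(reduced[0], 3))
```

Because each step crosses one ring link, the wall-clock cost of the collective depends directly on the slowest link, which is precisely where network- and fabric-aware scheduling can help.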

Game theory and its applications

Our research in game theory is rooted in the theoretical modeling of strategic interactions in large-scale and decentralized systems. We employ frameworks such as non-cooperative games, cooperative games, evolutionary games, stochastic games, and controlled matching games to study how agents with potentially conflicting objectives behave and interact. These models are used to analyze equilibria, stability, efficiency trade-offs, and performance bounds, providing fundamental insights into resource sharing, competition, cooperation, and decision-making in networks and distributed systems. The overarching goal of this research is to highlight the power of game theory as a mathematical tool to predict outcomes, design incentive mechanisms, and guide the development of robust algorithms for complex environments.
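
For a self-contained example of the evolutionary-game toolkit, the sketch below iterates the replicator dynamics on the classic Hawk-Dove game; the payoff values V and C are illustrative, and the strategy shares converge to the mixed evolutionarily stable strategy with a V/C fraction of hawks.

```python
import numpy as np

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator dynamics
    x_i' = x_i * ((A x)_i - x . A x),
    the standard model of strategy-share evolution in evolutionary games."""
    fitness = A @ x
    avg = x @ fitness
    return x + dt * x * (fitness - avg)

# Hawk-Dove game with resource V = 2 and fight cost C = 3:
# Hawk vs Hawk earns (V - C) / 2, Hawk vs Dove earns V, etc.
V, C = 2.0, 3.0
A = np.array([[(V - C) / 2, V],
              [0.0,         V / 2]])

x = np.array([0.1, 0.9])  # initial shares of (Hawk, Dove)
for _ in range(5000):
    x = replicator_step(x, A)
print("evolutionarily stable mix:", np.round(x, 3))  # ~ (V/C, 1 - V/C)
```

The same machinery extends to the population games we study in networks, where "strategies" are protocol or resource-allocation choices and the dynamics predict which equilibria a decentralized system will actually settle into.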