Institutional Repository
Technical University of Crete

Federated LoRA-Tuning for LLMs

Kelaidis Kanakis

URI: http://purl.tuc.gr/dl/dias/0AFEB004-A210-4201-9666-C1ABED8AAFAF
Year: 2025
Type of Item: Diploma Work
Bibliographic Citation: Kanakis Kelaidis, "Federated LoRA-Tuning for LLMs", Diploma Work, School of Electrical and Computer Engineering, Technical University of Crete, Chania, Greece, 2025. https://doi.org/10.26233/heallink.tuc.105019

Summary

Large language models (LLMs) have become essential across a wide spectrum of applications, from conversational agents to code generation, making fine-tuning on domain-specific data a ubiquitous need. Yet their deployment in real-world domains is often constrained by data isolation, computational cost, and memory requirements. Centralizing proprietary data is frequently infeasible, and forcing each organization to train on its own limited dataset typically yields inferior models. Federated Learning offers a natural solution by enabling multiple clients to collaborate without sharing raw data, but naively applying it to massive architectures remains computationally demanding and communication-intensive. In this thesis, we present a framework for federated fine-tuning of LLMs via Low-Rank Adaptation (LoRA), focusing on efficiency and performance. Building on the recently proposed DP-LoRA framework, we reformulate the original algorithm and evaluate the performance ceiling of federated LoRA-tuning in its non-private form. By introducing small low-rank trainable matrices into the transformer attention layers, LoRA reduces the number of tunable parameters by orders of magnitude, making per-client training both feasible and communication-efficient in federated environments. We also implement components for data formatting, inference, and output parsing to improve data preparation and evaluation, and we justify our choice of Gemma3-4B as the backbone model from among the many available options. Our experiments compare against the non-private baselines reported in the DP-LoRA study and show that our approach outperforms them, establishing a new benchmark for this setting. These findings highlight the utility of parameter-efficient federated adaptation of LLMs in scenarios where maximizing accuracy and efficiency is the primary goal, and they suggest promising directions for future research on improving and extending these methods.
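
To make the adapter mechanism described in the summary concrete, the sketch below is a minimal illustration in PyTorch, not the implementation used in the thesis: a LoRA-style low-rank update around a frozen linear projection, plus a FedAvg-style server step that averages only the adapter matrices communicated by each client. The class and function names, the rank and alpha values, and the plain averaging rule are illustrative assumptions.

# Minimal illustrative sketch (not the thesis code): LoRA adapter around a
# frozen linear projection, with FedAvg-style averaging of adapter weights.
import copy
import torch
import torch.nn as nn


class LoRALinear(nn.Module):
    """Frozen base projection W plus a trainable low-rank update scaled by alpha/rank."""

    def __init__(self, base: nn.Linear, rank: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)  # pretrained weights stay frozen
        # Low-rank factors: A is (rank x in_features), B is (out_features x rank)
        self.lora_A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # y = x W^T + (x A^T) B^T * scaling; only A and B receive gradients
        return self.base(x) + (x @ self.lora_A.T @ self.lora_B.T) * self.scaling


def fedavg_lora(client_states: list[dict[str, torch.Tensor]]) -> dict[str, torch.Tensor]:
    """Average only the LoRA parameters sent by each client (plain FedAvg)."""
    keys = client_states[0].keys()
    return {k: torch.stack([s[k] for s in client_states]).mean(dim=0) for k in keys}


if __name__ == "__main__":
    base = nn.Linear(64, 64, bias=False)
    clients = [LoRALinear(copy.deepcopy(base)) for _ in range(3)]

    # ... each client would fine-tune its adapter on local data here ...

    # Clients upload only their LoRA tensors; the server averages them.
    states = [{k: v for k, v in c.state_dict().items() if "lora_" in k} for c in clients]
    global_lora = fedavg_lora(states)

    # The averaged adapter is loaded back into every client for the next round.
    for c in clients:
        c.load_state_dict(global_lora, strict=False)

Because only lora_A and lora_B are trainable and exchanged, each round communicates a few thousand parameters per adapted layer rather than the full weight matrix, which is what makes per-client training and federated communication tractable, as the summary notes.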
