Institutional Repository
Technical University of Crete

Optimizing network-friendly recommendations and caching jointly, using reinforcement learning

Alogoskoufis Alexandros

URI: http://purl.tuc.gr/dl/dias/D41C4EEA-6FF6-4D01-A651-4E3AF043D58B
Year: 2025
Type of Item: Diploma Work
Bibliographic Citation: Alexandros Alogoskoufis, "Optimizing network-friendly recommendations and caching jointly, using reinforcement learning", Diploma Work, School of Electrical and Computer Engineering, Technical University of Crete, Chania, Greece, 2025. https://doi.org/10.26233/heallink.tuc.103088

Summary

The rapid expansion of content streaming platforms has driven a significant surge in global internet traffic. Video-on-demand services, music streaming platforms, and online news providers serve billions of users, each expecting seamless, personalized experiences with minimal delays. To meet these demands, Content Delivery Networks (CDNs) and distributed caching infrastructures store frequently accessed content closer to end users, reducing latency and alleviating network congestion. Simultaneously, sophisticated recommendation systems enhance user engagement by tailoring content suggestions to individual preferences. However, despite their widespread adoption, caching and recommendation systems are typically treated as independent problems, leading to inefficiencies and increased operational costs.

A core challenge lies in the exponential growth of digital content and the rising number of mobile users who expect uninterrupted access across diverse devices and network conditions. Caching systems prioritize network efficiency by storing popular content in strategic locations, while recommendation systems focus purely on engagement, often suggesting content that is not locally cached. This misalignment results in frequent cache misses, increased bandwidth consumption, and higher latency, limiting the scalability of current content delivery solutions.

Recent research has explored cache-aware recommendation strategies that adjust content suggestions based on cache availability to reduce delivery costs. However, these approaches predominantly rely on heuristics or static optimization techniques that struggle to adapt to dynamic user behavior and evolving network conditions. Such methods fail to fully leverage machine learning-driven decision-making, which can dynamically optimize both recommendations and caching policies in real time.

To address this gap, this thesis introduces a Reinforcement Learning (RL)-based framework that jointly optimizes recommendation and caching decisions. We formulate the problem as a Markov Decision Process (MDP), capturing the interplay between user preferences, cache dynamics, and network constraints. Our approach employs Double Deep Q-Networks (DDQN) to learn adaptive policies that balance content relevance and caching efficiency, enhancing both user experience and network resource utilization. Unlike traditional methods that rely on predefined content popularity distributions, our framework continuously learns optimal strategies through interaction with the environment.

To evaluate the effectiveness of our approach, we conduct extensive simulations using both synthetic and real-world datasets, comparing our RL-based system against baseline heuristics that optimize caching and recommendations separately. Experimental results demonstrate that our method achieves superior cache hit rates, reduces bandwidth consumption, and improves user satisfaction across varying session lengths and cache sizes. Additionally, we perform sensitivity analyses to assess the impact of user behavior and cache capacity on system performance.

This research advances the field by demonstrating that network-aware, RL-driven recommendations can significantly enhance CDN efficiency while preserving high levels of personalization. Our findings pave the way for self-optimizing content delivery networks that dynamically adapt to user demand and network conditions without manual intervention, offering a scalable and intelligent solution for future content distribution challenges.
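
The summary describes the approach only at a high level. The minimal sketch below illustrates how a joint recommendation-and-caching problem can be cast as an MDP and trained with a double Q-learning update, the tabular analogue of the DDQN the thesis employs; it is not the thesis implementation. The catalogue size, FIFO cache policy, user-follow probability, and reward weighting are illustrative assumptions.

# Illustrative sketch: joint recommendation + caching as an MDP,
# trained with tabular double Q-learning (the core idea behind DDQN).
# All sizes and weights below are assumptions, not the thesis setup.

import random
import numpy as np

N_ITEMS = 50        # catalogue size (assumed)
CACHE_SIZE = 5      # edge-cache capacity (assumed)
ALPHA_FOLLOW = 0.8  # prob. the user accepts the recommendation (assumed)
GAMMA = 0.9         # discount factor
LR = 0.1            # learning rate
EPSILON = 0.1       # exploration rate

rng = np.random.default_rng(0)
relevance = rng.random((N_ITEMS, N_ITEMS))   # stand-in for user preferences

# State: the item the user is currently consuming.
# Action: the item to recommend next (and to prefetch into the cache).
Q1 = np.zeros((N_ITEMS, N_ITEMS))
Q2 = np.zeros((N_ITEMS, N_ITEMS))

cache = list(range(CACHE_SIZE))              # simple FIFO cache (assumed policy)

def step(state, action):
    """Environment transition: prefetch the recommended item, observe the
    next request, and return (next_state, reward)."""
    if action not in cache:
        cache.pop(0)
        cache.append(action)
    # The user either follows the recommendation or requests at random.
    next_item = action if random.random() < ALPHA_FOLLOW else random.randrange(N_ITEMS)
    hit = 1.0 if next_item in cache else 0.0
    # Reward balances network friendliness (cache hit) and content relevance.
    reward = hit + 0.5 * relevance[state, next_item]
    return next_item, reward

state = random.randrange(N_ITEMS)
for t in range(20000):
    # Epsilon-greedy action selection over the sum of the two value tables.
    if random.random() < EPSILON:
        action = random.randrange(N_ITEMS)
    else:
        action = int(np.argmax(Q1[state] + Q2[state]))
    next_state, reward = step(state, action)

    # Double Q-learning: one table selects the greedy next action,
    # the other evaluates it (mirrors the online/target split in DDQN).
    if random.random() < 0.5:
        best = int(np.argmax(Q1[next_state]))
        target = reward + GAMMA * Q2[next_state, best]
        Q1[state, action] += LR * (target - Q1[state, action])
    else:
        best = int(np.argmax(Q2[next_state]))
        target = reward + GAMMA * Q1[next_state, best]
        Q2[state, action] += LR * (target - Q2[state, action])
    state = next_state

The decoupled evaluation of the greedy action is what distinguishes this from plain Q-learning and reduces the overestimation bias that DDQN addresses in the deep setting; the thesis replaces the tables with neural networks over richer state representations.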
