Deep Learning based Collaborative Edge Caching for Mobile Edge Networks

Student thesis: Doctoral Thesis › Doctor of Philosophy

Abstract

Currently, massive numbers of end devices connect to wireless networks to access various services and applications, e.g., content access, environmental monitoring and virtual management. This inevitably results in significant backbone network congestion, leading to a serious degradation in the quality of service/experience (QoS/QoE). To address these issues, mobile edge computing (MEC) has emerged as a promising paradigm, providing computation, caching and communication capabilities at network edges, e.g., base stations (BSs). Accordingly, edge caching has been proposed as an efficient approach to lower retrieval latency and alleviate network congestion by caching popular content in proximity to users. Nonetheless, several critical challenges must be addressed before the full potential of edge caching can be realized. First, how to precisely predict users’ requests and cache the proper content in storage-constrained edge servers remains an open problem. Second, designing collaborative caching policies for distributed edge servers in unknown and dynamic environments remains challenging. Conventional caching policies, such as least recently used (LRU) and least frequently used (LFU), cannot adapt to complex scenarios, particularly when the environment is dynamic or even unknown. In contrast, machine learning algorithms can learn and capture hidden features of massive request data in dynamic networks. In particular, reinforcement learning (RL) algorithms can iteratively optimize caching policies through repeated interaction with the environment, learning from either online or offline experiences.
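
To make the contrast concrete, the snippet below is a minimal sketch (not taken from the thesis) of the conventional LRU baseline mentioned above; the content identifiers, capacity, request trace and the fetch_fn callback are all hypothetical illustrations. A learning-based policy would replace the fixed recency-based eviction rule with a learned one.

    # Minimal LRU cache sketch (hypothetical example, not the thesis's code).
    from collections import OrderedDict

    class LRUCache:
        """Evicts the least recently used content when the cache is full."""

        def __init__(self, capacity: int):
            self.capacity = capacity
            self.store = OrderedDict()  # content_id -> content payload

        def request(self, content_id, fetch_fn):
            if content_id in self.store:
                self.store.move_to_end(content_id)   # mark as most recently used
                return self.store[content_id], True  # cache hit
            payload = fetch_fn(content_id)           # cache miss: fetch from origin
            self.store[content_id] = payload
            if len(self.store) > self.capacity:
                self.store.popitem(last=False)       # evict least recently used
            return payload, False

    # Example usage with a synthetic request trace.
    if __name__ == "__main__":
        cache = LRUCache(capacity=3)
        trace = [1, 2, 3, 1, 4, 2, 5, 1]
        hits = sum(cache.request(c, lambda c: f"content-{c}")[1] for c in trace)
        print(f"hit ratio: {hits / len(trace):.2f}")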

Considering the aforementioned challenges and the advantages machine learning algorithms have shown in handling environment dynamics and request uncertainties, this thesis focuses on the design of optimal caching strategies for collaborative edge caching networks by leveraging advances in both optimization and machine learning.

The general aim of this thesis is to apply novel optimization and machine learning methods to collaborative edge caching systems, in order to enhance resource utilization from a caching perspective, as well as scalability and robustness from an algorithmic design perspective. Accordingly, this thesis introduces three innovative caching schemes to realize this aim. The first work develops a novel caching policy optimization approach that integrates conventional RL techniques with evolutionary algorithms to enhance collaboration in an MEC system with heterogeneous scenarios. The second work proposes a two-phase proactive caching scheme based on a deep Q-network (DQN) to overcome the curse of dimensionality in a computationally efficient manner. The third contribution focuses on the joint cache update and request delivery problem in scalable collaborative edge networks, developing a unified federated DQN caching scheme that minimizes the system cost while ensuring QoS. The performance of the proposed caching schemes is compared and analyzed in complex and dynamic scenarios through comprehensive experiments, which highlight their advantages in terms of caching performance, scalability and robustness.
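
As a rough illustration of the DQN-based caching idea underlying the second and third contributions, the following sketch trains a toy deep Q-network that decides which cache slot (if any) to overwrite on each request. The content universe, cache size, state encoding, reward definition and request model here are all hypothetical and are not the formulation used in the thesis.

    # Toy DQN-style cache-update sketch (hypothetical setting, not the
    # thesis's formulation): state = cache occupancy + current request,
    # action = cache slot to overwrite (or do nothing), reward = hit/miss.
    import random
    from collections import deque

    import torch
    import torch.nn as nn
    import torch.optim as optim

    NUM_CONTENTS, CACHE_SIZE = 20, 5

    class QNet(nn.Module):
        """Maps the encoded state to Q-values over CACHE_SIZE + 1 actions."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(2 * NUM_CONTENTS, 64), nn.ReLU(),
                nn.Linear(64, CACHE_SIZE + 1),
            )

        def forward(self, x):
            return self.net(x)

    def encode(cache, request):
        # One-hot cache occupancy concatenated with a one-hot requested item.
        state = torch.zeros(2 * NUM_CONTENTS)
        for c in cache:
            state[c] = 1.0
        state[NUM_CONTENTS + request] = 1.0
        return state

    qnet = QNet()
    optimizer = optim.Adam(qnet.parameters(), lr=1e-3)
    replay = deque(maxlen=10_000)
    cache = list(range(CACHE_SIZE))        # initial cache contents
    epsilon, gamma = 0.1, 0.9

    for step in range(2000):
        request = random.randrange(NUM_CONTENTS)   # toy request model
        state = encode(cache, request)
        hit = request in cache
        # Epsilon-greedy: pick a slot to replace, or CACHE_SIZE for "do not cache".
        if random.random() < epsilon:
            action = random.randrange(CACHE_SIZE + 1)
        else:
            with torch.no_grad():
                action = int(qnet(state).argmax())
        if action < CACHE_SIZE and not hit:
            cache[action] = request            # overwrite the chosen slot
        reward = 1.0 if hit else -1.0          # a hit avoids backhaul retrieval cost
        next_state = encode(cache, random.randrange(NUM_CONTENTS))
        replay.append((state, action, reward, next_state))

        # One-step temporal-difference update on a sampled mini-batch.
        if len(replay) >= 64:
            batch = random.sample(replay, 64)
            s = torch.stack([b[0] for b in batch])
            a = torch.tensor([b[1] for b in batch])
            r = torch.tensor([b[2] for b in batch])
            s2 = torch.stack([b[3] for b in batch])
            q = qnet(s).gather(1, a.unsqueeze(1)).squeeze(1)
            with torch.no_grad():
                target = r + gamma * qnet(s2).max(dim=1).values
            loss = nn.functional.mse_loss(q, target)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

In a collaborative or federated variant, each edge server would train such an agent locally and periodically share model updates rather than raw request data; this sketch omits that coordination step.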
Date of Award: 1 Jul 2024
Original language: English
Awarding Institution:
  • King's College London
Supervisor: Mohammad Nakhai
