Deep Reinforcement Learning-Based Grant-Free NOMA Optimization for mURLLC

Yan Liu, Yansha Deng*, Hui Zhou, Maged Elkashlan, Arumugam Nallanathan

*Corresponding author for this work

Research output: Contribution to journal › Article › peer-review

5 Citations (Scopus)


Grant-free non-orthogonal multiple access (GF-NOMA) is a promising technique for supporting the massive Ultra-Reliable and Low-Latency Communication (mURLLC) service. However, dynamic resource configuration in GF-NOMA systems is challenging due to random traffic and collisions that are unknown at the base station (BS). Meanwhile, jointly considering the latency and reliability requirements makes the resource configuration of GF-NOMA for mURLLC even more complex. To address this problem, we develop a novel learning framework for signature-based GF-NOMA in the mURLLC service that takes into account multiple access signature collisions, UE detection, and the data decoding procedures of the K-repetition GF and Proactive GF schemes. The goal of our learning framework is to maximize the long-term average number of successfully served user equipments (UEs) under the latency constraint. We first perform real-time repetition value configuration based on a double deep Q-network (DDQN) and then propose a DQN-based Cooperative Multi-Agent learning technique (CMA-DQN) to jointly optimize the configuration of both the repetition values and the number of contention-transmission units (CTUs). Our results show the superior performance of CMA-DQN over the conventional load estimation-based uplink resource configuration approach (LE-URC) under heavy traffic and demonstrate its capability to dynamically configure resources over the long term for the mURLLC service. In addition, with our learning optimization, the Proactive scheme always outperforms the K-repetition scheme in terms of the number of successfully served UEs, especially under high backlog traffic.
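The core of the DDQN idea referenced in the abstract is the double-Q update rule: the online network selects the greedy next action while a periodically synchronized target network evaluates it, which reduces the overestimation bias of plain Q-learning. The sketch below illustrates this update with a tabular stand-in for the networks; the toy environment (discretized backlog states, candidate repetition values, and the reward shape) is entirely an illustrative assumption and is not the system model or algorithm from the paper.

```python
import numpy as np

# Tabular double-Q sketch of the DDQN-style repetition-value configuration.
# States, actions, and dynamics below are toy assumptions for illustration only.

rng = np.random.default_rng(0)
N_STATES = 5                 # discretized backlog levels (assumption)
REP_VALUES = [1, 2, 4, 8]    # candidate repetition values (assumption)
N_ACTIONS = len(REP_VALUES)

q_online = np.zeros((N_STATES, N_ACTIONS))   # stand-in for the online network
q_target = np.zeros((N_STATES, N_ACTIONS))   # stand-in for the target network
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(state, action):
    """Toy environment: reward stands in for served UEs; more repetitions help
    at high backlog but waste resources at low backlog (purely illustrative)."""
    rep = REP_VALUES[action]
    reward = min(state + 1, rep) - 0.1 * rep
    next_state = int(rng.integers(N_STATES))  # random traffic arrival (assumption)
    return next_state, reward

state = 0
for t in range(20000):
    # epsilon-greedy exploration over repetition values
    if rng.random() < eps:
        action = int(rng.integers(N_ACTIONS))
    else:
        action = int(np.argmax(q_online[state]))
    next_state, reward = step(state, action)

    # Double-Q target: the online table selects the next action,
    # the target table evaluates it.
    best_next = int(np.argmax(q_online[next_state]))
    td_target = reward + gamma * q_target[next_state, best_next]
    q_online[state, action] += alpha * (td_target - q_online[state, action])

    if t % 200 == 0:          # periodic target sync, as in DDQN
        q_target = q_online.copy()
    state = next_state

# Greedy repetition value per backlog level after training
greedy_reps = [REP_VALUES[int(a)] for a in np.argmax(q_online, axis=1)]
print(greedy_reps)
```

In the paper's setting the tables would be replaced by neural networks trained on minibatches from a replay buffer, and CMA-DQN would run one such agent per configuration variable (repetition value and CTU number) with agents trained in parallel against the shared environment.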

Original language: English
Pages (from-to): 1475-1490
Number of pages: 16
Issue number: 3
Publication status: Published - 1 Mar 2023


  • deep reinforcement learning
  • grant free
  • mURLLC
  • NOMA
  • resource configuration


