Deep Reinforcement Learning-Based Joint User Association and CU–DU Placement in O-RAN

Published in IEEE Transactions on Network and Service Management, 2022

Recommended citation: R. Joda, T. Pamuklu, P. E. Iturria-Rivera and M. Erol-Kantarci, "Deep Reinforcement Learning-Based Joint User Association and CU–DU Placement in O-RAN," IEEE Transactions on Network and Service Management, vol. 19, no. 4, pp. 4097-4110, Dec. 2022. https://ieeexplore.ieee.org/document/9946423

The Open Radio Access Network (O-RAN) architecture is based on disaggregation, virtualization, openness, and intelligence. These features allow the RAN network functions (NFs) to be split into the Central Unit (CU), Distributed Unit (DU), and Radio Unit (RU), and deployed on open hardware and cloud nodes as Virtualized Network Functions (VNFs) or Containerized Network Functions (CNFs). In this paper, we propose strategies for the placement of CU and DU network functions in the regional and edge O-Cloud nodes while jointly associating the users to RUs. The aim is to minimize the end-to-end delay of users and minimize the cost of O-RAN deployment. Thus, we first formulate the end-to-end delay, the cost, and the constraints. We then model the problem as a multi-objective optimization problem. The optimization formulation consists of a very large number of constraints and variables. To provide a solution to the problem, we develop the corresponding Markov Decision Process (MDP) and propose a Deep Q-Network (DQN)-based algorithm. The simulation results demonstrate that our proposed scheme reduces the average user delay by up to 40% and the deployment cost by up to 20% with respect to our baselines.
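The abstract's key ingredients can be illustrated with a minimal sketch: the multi-objective reward scalarizes delay and deployment cost, the state encodes per-user demands plus the current DU placement, and the joint action space covers both RU association and CU/DU placement choices. All sizes, weights, and network shapes below are hypothetical, and a tiny NumPy MLP stands in for the deep Q-network of the paper; replay memory and target networks are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical problem sizes (not taken from the paper's scenarios).
N_USERS, N_RUS, N_CLOUDS = 4, 3, 2
# Assumed scalarization weights for the two objectives.
W_DELAY, W_COST = 0.7, 0.3

# State: per-user demand plus a one-hot of the current DU placement.
STATE_DIM = N_USERS + N_CLOUDS
# Joint action: pick an RU for a user, or move the DU to an O-Cloud node.
N_ACTIONS = N_RUS + N_CLOUDS

def reward(delay: float, cost: float) -> float:
    """Scalarized multi-objective reward: lower delay and cost are better."""
    return -(W_DELAY * delay + W_COST * cost)

# Tiny two-layer Q-network (stand-in for the DQN's deep network).
W1 = rng.normal(0.0, 0.1, (STATE_DIM, 16))
W2 = rng.normal(0.0, 0.1, (16, N_ACTIONS))

def q_values(state: np.ndarray) -> np.ndarray:
    """Q-value estimate for every joint association/placement action."""
    return np.maximum(state @ W1, 0.0) @ W2  # ReLU hidden layer

def act(state: np.ndarray, eps: float) -> int:
    """Epsilon-greedy selection over the joint action space."""
    if rng.random() < eps:
        return int(rng.integers(N_ACTIONS))
    return int(np.argmax(q_values(state)))
```

In the full algorithm, each transition (state, action, reward, next state) would be stored in a replay buffer and the network trained on sampled mini-batches against a periodically updated target network, as is standard for DQN.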

Download paper here
