dc.contributor.author | Sande, Malcolm Makomborero |
dc.contributor.author | Hlophe, Mduduzi Comfort |
dc.contributor.author | Maharaj, Bodhaswar Tikanath Jugpershad |
dc.date.accessioned | 2022-03-31T11:07:23Z |
dc.date.available | 2022-03-31T11:07:23Z |
dc.date.issued | 2021-08 |
dc.description.abstract | Congestion in dense traffic networks is a prominent obstacle towards realizing the performance requirements of 5G new radio. Since traditional adaptive traffic signal control cannot resolve this type of congestion, realizing context in the network and adapting resource allocation based on real-time parameters is an attractive approach. This article proposes a radio resource management solution for congestion avoidance on the access side of an integrated access and backhaul (IAB) network using deep reinforcement learning (DRL). The objective of this article is to obtain an optimal policy under which the transmission throughput of all UEs is maximized under the dictates of environmental pressures such as traffic load and transmission power. Here, the resource management problem was converted into a constrained problem using Markov decision processes and dynamic power management, where a deep neural network was trained for optimal power allocation. By initializing a power control parameter, t, with a zero-mean normal distribution, the DRL algorithm adopts a learning policy that aims to achieve logical allocation of resources by placing more emphasis on congestion control and user satisfaction. The performance of the proposed DRL algorithm was evaluated using two learning schemes, i.e., individual learning and nearest neighbor cooperative learning, and this was compared with the performance of a baseline algorithm. The simulation results indicate that the proposed algorithms give better overall performance when compared to the baseline algorithm. From the simulation results, there is a subtle but critically important insight that brings into focus the fundamental connection between the learning rate and the two proposed algorithms. The nearest neighbor cooperative learning algorithm is suitable for IAB networks because its throughput has a good correlation with the congestion rate. | en_ZA
dc.description.department | Electrical, Electronic and Computer Engineering | en_ZA
dc.description.librarian | am2022 | en_ZA
dc.description.sponsorship | The Sentech Chair in Broadband Wireless Multimedia Communications (BWMC), University of Pretoria. | en_ZA
dc.description.uri | http://ieeexplore.ieee.org/xpl/RecentIssue.jsp?punumber=6287639 | en_ZA
dc.identifier.citation | Sande, M.M., Hlophe, M.C., Maharaj, B.T. 2021, 'Access and radio resource management for IAB networks using deep reinforcement learning', IEEE Access, vol. 9, pp. 114218-114234. | en_ZA
dc.identifier.issn | 2169-3536 (online) |
dc.identifier.other | 10.1109/ACCESS.2021.3104322 |
dc.identifier.uri | http://hdl.handle.net/2263/84738 |
dc.language.iso | en | en_ZA
dc.publisher | Institute of Electrical and Electronics Engineers | en_ZA
dc.rights | This work is licensed under a Creative Commons Attribution 4.0 License. | en_ZA
dc.subject | Congestion control | en_ZA
dc.subject | Millimeter wave | en_ZA
dc.subject | Nearest neighbor | en_ZA
dc.subject | Resource allocation | en_ZA
dc.subject | Integrated access and backhaul (IAB) | en_ZA
dc.subject | Deep reinforcement learning (DRL) | en_ZA
dc.title | Access and radio resource management for IAB networks using deep reinforcement learning | en_ZA
dc.type | Article | en_ZA
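
Note on the approach described in the abstract: the resource management problem is cast as a constrained Markov decision process, a deep neural network is trained to allocate transmit power, and a power control parameter is initialized from a zero-mean normal distribution. The sketch below is a minimal, hypothetical Python (PyTorch) illustration of that general idea only, not the authors' implementation; the class name PowerPolicy, the network dimensions, the reinforce_step helper, and the reward shaping (throughput minus a congestion penalty) are all illustrative assumptions.

import torch
import torch.nn as nn

class PowerPolicy(nn.Module):
    """Hypothetical Gaussian policy over per-UE transmit power (illustrative only)."""
    def __init__(self, state_dim: int, n_ues: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, n_ues),        # mean transmit power per UE
        )
        # Power control parameter drawn from a zero-mean normal distribution,
        # mirroring the initialization described in the abstract.
        self.t = nn.Parameter(torch.empty(n_ues))
        nn.init.normal_(self.t, mean=0.0, std=0.1)

    def forward(self, state: torch.Tensor) -> torch.distributions.Normal:
        mean_power = self.body(state) + self.t
        return torch.distributions.Normal(mean_power, scale=0.2)

def reinforce_step(policy, optimizer, state, reward_fn):
    # One REINFORCE-style update: sample a power vector, score it with a reward
    # that trades throughput against congestion, and follow the policy gradient.
    dist = policy(state)
    power = dist.sample()
    reward = reward_fn(power)            # scalar: throughput minus congestion penalty
    loss = -dist.log_prob(power).sum() * reward
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return float(reward)

# Example usage with random placeholder data:
policy = PowerPolicy(state_dim=8, n_ues=4)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
state = torch.randn(8)
reinforce_step(policy, optimizer, state, lambda p: float(10.0 - p.abs().sum()))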