Deep reinforcement learning agents for dynamic spectrum access in television whitespace cognitive radio networks

dc.contributor.author Ukpong, Udeme C.
dc.contributor.author Idowu-Bismark, Olabode
dc.contributor.author Adetiba, Emmanuel
dc.contributor.author Kala, Jules R.
dc.contributor.author Owolabi, Emmanuel
dc.contributor.author Oshin, Oluwadamilola
dc.contributor.author Abayomi, Abdultaofeek
dc.contributor.author Dare, Oluwatobi E.
dc.date.accessioned 2025-02-07T07:51:46Z
dc.date.available 2025-02-07T07:51:46Z
dc.date.issued 2025-03
dc.description.abstract Businesses, security agencies, institutions, and individuals depend on wireless communication to run their day-to-day activities. The ever-increasing demand for wireless communication services, coupled with the scarcity of available radio frequency spectrum, necessitates innovative approaches to spectrum management. Cognitive Radio (CR) technology has emerged as a pivotal solution, enabling dynamic spectrum sharing among secondary users while respecting the rights of primary users. However, a basic CR setup is insufficient to manage spectrum congestion, as it cannot predict future spectrum holes, which leads to interference. With predictive intelligence and Dynamic Spectrum Access (DSA), a CR can anticipate when and where other users will occupy the radio frequency spectrum, overcoming this limitation. Reinforcement Learning (RL) in CRs helps predict spectral changes and identify optimal transmission frequencies. This work presents the development of Deep RL (DRL) models for enhanced DSA in TV Whitespace (TVWS) cognitive radio networks using the Deep Q-Network (DQN) and Quantile-Regression DQN (QR-DQN) algorithms. The implementation was done in the Radio Frequency Reinforcement Learning (RFRL) Gym, a training environment that simulates the RF spectrum for RL agents. Evaluations show that the DQN model achieves a 96.34% interference avoidance rate compared to 95.97% for QR-DQN, with average latency estimated at 1 millisecond and 3.33 milliseconds per packet, respectively. DRL therefore proves to be a more flexible, scalable, and adaptive approach to dynamic spectrum access, making it particularly effective in the complex and constantly evolving wireless spectrum environment. en_US
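The DQN-based spectrum-access loop described in the abstract can be illustrated with a minimal sketch. The toy environment below (ToySpectrumEnv, with a single cyclically hopping primary user, a one-hot occupancy observation, and a ±1 collision reward) is an assumption made purely for illustration, not the paper's RFRL Gym scenario, and it trains stable-baselines3's off-the-shelf DQN rather than the authors' exact model.

```python
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import DQN

class ToySpectrumEnv(gym.Env):
    """Toy TVWS-style environment (illustrative assumption, not RFRL Gym):
    one primary user occupies one of `n_channels` channels per step and hops
    cyclically; the secondary-user agent picks a channel each step and is
    rewarded for transmitting on a free channel."""

    def __init__(self, n_channels=5, episode_len=100):
        super().__init__()
        self.n_channels = n_channels
        self.episode_len = episode_len
        self.action_space = spaces.Discrete(n_channels)
        # Observation: one-hot channel occupancy from the latest sensing step.
        self.observation_space = spaces.MultiBinary(n_channels)

    def _occupancy(self):
        occ = np.zeros(self.n_channels, dtype=np.int8)
        occ[self.t % self.n_channels] = 1  # primary user's current channel
        return occ

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = 0
        return self._occupancy(), {}

    def step(self, action):
        occupied = self._occupancy()
        # +1 for using a free channel, -1 for colliding with the primary user.
        reward = -1.0 if occupied[action] else 1.0
        self.t += 1
        truncated = self.t >= self.episode_len
        return self._occupancy(), reward, False, truncated, {}

if __name__ == "__main__":
    env = ToySpectrumEnv()
    model = DQN("MlpPolicy", env, learning_rate=1e-3, verbose=0)
    model.learn(total_timesteps=20_000)
    obs, _ = env.reset()
    action, _ = model.predict(obs, deterministic=True)
    print("chosen channel:", int(action))
```

Swapping in the distributional QR-DQN variant compared in the paper (available as QRDQN in the sb3-contrib package) follows the same training pattern.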
dc.description.department Electrical, Electronic and Computer Engineering en_US
dc.description.librarian hj2024 en_US
dc.description.sdg SDG-09: Industry, innovation and infrastructure en_US
dc.description.sponsorship The Covenant Applied Informatics and Communication Africa Centre of Excellence (CApICACE), and Google through the Google Award for TensorFlow Outreaches in Colleges. en_US
dc.description.uri https://www.elsevier.com/locate/sciaf en_US
dc.identifier.citation Ukpong, U.C., Idowu-Bismark, O., Adetiba, E. et al. 2025, 'Deep reinforcement learning agents for dynamic spectrum access in television whitespace cognitive radio networks', Scientific African, vol. 27, art. e02523, pp. 1-16, doi: 10.1016/j.sciaf.2024.e02523. en_US
dc.identifier.issn 2468-2276 (online)
dc.identifier.other 10.1016/j.sciaf.2024.e02523
dc.identifier.uri http://hdl.handle.net/2263/100608
dc.language.iso en en_US
dc.publisher Elsevier en_US
dc.rights © 2024 The Authors. Published by Elsevier B.V. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/). en_US
dc.subject Wireless communication en_US
dc.subject Cognitive radio networks en_US
dc.subject Deep reinforcement learning (DRL) en_US
dc.subject Deep Q-networks (DQN) en_US
dc.subject Dynamic spectrum access (DSA) en_US
dc.subject Quantile-regression deep Q-networks (QR-DQN) en_US
dc.subject RFRL gym en_US
dc.subject Television whitespace (TVWS) en_US
dc.subject Radio frequency reinforcement learning (RFRL) en_US
dc.subject SDG-09: Industry, innovation and infrastructure en_US
dc.title Deep reinforcement learning agents for dynamic spectrum access in television whitespace cognitive radio networks en_US
dc.type Article en_US

