Learning Algorithms for Data Collection in RF-Charging IIoT Networks

Publication Name

IEEE Transactions on Industrial Informatics

Abstract

Data collection is a fundamental operation in energy harvesting industrial Internet of Things networks. To this end, we consider a hybrid access point (HAP) or controller that is responsible for charging sensor devices and collecting L bits from them. The problem at hand is to optimize the transmit power allocation of the HAP over multiple time frames. The main challenge is that the HAP has only causal channel state information of its channels to devices. In this article, we outline a novel two-step reinforcement learning with Gibbs sampling (TSRL-Gibbs) strategy, where the first step uses Q-learning and an action space comprising transmit power allocations sampled from a multidimensional simplex. The second step applies Gibbs sampling to further refine the action space. Our results show that TSRL-Gibbs requires up to 28.5% fewer frames than competing approaches.
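The abstract's first step builds a Q-learning action space by sampling transmit power allocations from a multidimensional simplex. A minimal sketch of that sampling step is shown below; the function name, the use of a Dirichlet(1,...,1) draw for uniform simplex sampling, and the power-budget parameter `p_max` are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sample_power_actions(num_actions, num_frames, p_max, seed=None):
    """Build a candidate action space for Q-learning (illustrative sketch).

    Each action is one allocation of the HAP's power budget p_max across
    num_frames time frames. Sampling Dirichlet(1,...,1) gives points drawn
    uniformly from the probability simplex; scaling by p_max yields
    allocations that each sum to the full budget.
    """
    rng = np.random.default_rng(seed)
    simplex_points = rng.dirichlet(np.ones(num_frames), size=num_actions)
    return p_max * simplex_points

# Example: 16 candidate allocations of a 2.0 W budget over 4 frames.
actions = sample_power_actions(num_actions=16, num_frames=4, p_max=2.0, seed=0)
```

Each row of `actions` is a nonnegative per-frame power split that exhausts the budget, giving the Q-learner a finite action set over an otherwise continuous allocation space.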

Open Access Status

This publication is not available as open access

Volume

19

Issue

1

First Page

88

Last Page

97


Link to publisher version (DOI)

http://dx.doi.org/10.1109/TII.2022.3178381