Deep Learning-Based Dictionary Construction for MIMO Radar Detection in Complex Scenes

Publication Name

IEEE Internet of Things Journal

Abstract

Conventional sparse representation methods cannot effectively characterize the nonlinear effects caused by non-ideal space-time factors of a multiple-input multiple-output (MIMO) radar system and by scenes with complex non-uniform clutter. In addition, a single dictionary is typically employed to represent both target and clutter, which limits their separability and degrades target detection performance. In this paper, we propose a deep learning-based dictionary construction approach that yields target and clutter dictionaries with high separability and strong nonlinear correction capability, where the nonlinear characteristics of the received signal are effectively represented and corrected using a complex-valued convolutional autoencoder (CVCAE) network. Under the criteria of minimizing the reconstruction and nonlinear correction errors as well as the cross-correlation between the target and clutter dictionaries, we jointly learn a CVCAE-based nonlinear correction model for the nonlinearly distorted received signal and sparse representations of target and clutter in the corrected linear space. An iterative algorithm is proposed to jointly solve the resulting optimization problem. To acquire the optimal complete dictionaries, an allocation model of the transceiving space-time resources is constructed under the least squares (LS) criterion and solved via convex optimization. Extensive experiments on the measured Mountain-Top dataset demonstrate the effectiveness and superiority of the proposed method over state-of-the-art methods.
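The abstract's central building block is a complex-valued convolutional autoencoder used to correct nonlinear distortion in the received space-time snapshots. The paper does not disclose the network architecture here, so the following is only a minimal illustrative sketch, assuming a PyTorch setting; the layer widths, kernel size, and CReLU activation are assumptions, and complex convolution is realized in the standard way from two real-valued convolutions.

```python
# Minimal sketch (not the authors' implementation) of a complex-valued
# convolutional autoencoder (CVCAE) acting on 1-D complex radar snapshots.
import torch
import torch.nn as nn


class ComplexConv1d(nn.Module):
    """Complex convolution from two real Conv1d layers:
    (x_r + j x_i) * (w_r + j w_i) = (x_r*w_r - x_i*w_i) + j(x_r*w_i + x_i*w_r)."""

    def __init__(self, in_ch, out_ch, kernel_size, stride=1, padding=0):
        super().__init__()
        self.conv_r = nn.Conv1d(in_ch, out_ch, kernel_size, stride, padding)
        self.conv_i = nn.Conv1d(in_ch, out_ch, kernel_size, stride, padding)

    def forward(self, x_r, x_i):
        return (self.conv_r(x_r) - self.conv_i(x_i),
                self.conv_r(x_i) + self.conv_i(x_r))


class CVCAE(nn.Module):
    """Encoder-decoder pair mapping a nonlinearly distorted snapshot to a
    corrected (approximately linear) representation and back."""

    def __init__(self, channels=16, kernel_size=5):
        super().__init__()
        pad = kernel_size // 2
        self.enc = ComplexConv1d(1, channels, kernel_size, padding=pad)
        self.dec = ComplexConv1d(channels, 1, kernel_size, padding=pad)

    @staticmethod
    def _crelu(x_r, x_i):
        # CReLU: ReLU applied separately to real and imaginary parts.
        return torch.relu(x_r), torch.relu(x_i)

    def forward(self, x):
        # x: complex tensor of shape (batch, 1, N) holding space-time snapshots.
        h_r, h_i = self._crelu(*self.enc(x.real, x.imag))
        y_r, y_i = self.dec(h_r, h_i)
        return torch.complex(y_r, y_i)


if __name__ == "__main__":
    # Toy usage: reconstruction loss on random complex snapshots. The paper's
    # full objective also penalizes nonlinear correction error and the
    # cross-correlation between target and clutter dictionaries.
    model = CVCAE()
    x = torch.randn(8, 1, 64, dtype=torch.cfloat)
    y = model(x)
    loss = (y - x).abs().pow(2).mean()
    loss.backward()
    print(loss.item())
```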

Open Access Status

This publication is not available as open access

Link to publisher version (DOI)

http://dx.doi.org/10.1109/JIOT.2023.3332867