Anisotropic Convolutional Neural Networks for RGB-D based Semantic Scene Completion

Publication Name

IEEE Transactions on Pattern Analysis and Machine Intelligence

Abstract

Semantic Scene Completion (SSC) is a computer vision task that aims to simultaneously infer the occupancy and semantic label of each voxel in a scene from partial observations consisting of a depth image and/or an RGB image. As a voxel-wise labeling task, the key to SSC is effectively modeling the visual and geometric variations needed to complete the scene. To this end, we propose the Anisotropic Network, built from novel convolutional modules that model varying anisotropic receptive fields on a per-voxel basis in a computationally efficient manner. The basic idea behind this anisotropy is to decompose 3D convolution into consecutive dimensional convolutions and determine the dimension-wise kernels on the fly. One module, termed kernel-selection anisotropic convolution, adaptively selects the optimal kernel for each dimensional convolution from a set of candidate kernels; the other, termed kernel-modulation anisotropic convolution, modulates a single kernel for each dimension to derive a more flexible receptive field. Stacking multiple such modules further enhances the capability and flexibility of 3D context modeling. Moreover, we present a new end-to-end trainable framework for the SSC task that avoids the expensive TSDF pre-processing required by existing methods. Extensive experiments on SSC benchmarks demonstrate the advantages of the proposed methods.
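To make the decomposition idea concrete, the following is a minimal NumPy sketch (not the authors' implementation) of the core mechanism: a 3D convolution replaced by consecutive 1D convolutions along each axis, with a soft, per-axis selection among candidate kernels of different sizes. The function names, the softmax-based selection weights, and the single-channel setting are illustrative assumptions; the actual network learns these weights per voxel and operates on multi-channel features.

```python
import numpy as np

def conv1d_along_axis(vol, kernel, axis):
    # 'Same'-padded 1D convolution along one axis of a 3D volume.
    pad = len(kernel) // 2
    padded = np.pad(vol, [(pad, pad) if a == axis else (0, 0) for a in range(3)])
    out = np.zeros_like(vol, dtype=float)
    for i, w in enumerate(kernel):
        sl = [slice(None)] * 3
        sl[axis] = slice(i, i + vol.shape[axis])
        out += w * padded[tuple(sl)]
    return out

def kernel_selection_conv(vol, candidates, weights):
    # Decompose a 3D convolution into three consecutive dimensional (1D)
    # convolutions; along each axis, softly select among candidate kernels
    # (here via a softmax over per-axis selection logits), so the effective
    # receptive field can differ per dimension, i.e. be anisotropic.
    out = vol.astype(float)
    for axis in range(3):
        logits = weights[axis]
        w = np.exp(logits - logits.max())
        w = w / w.sum()  # softmax over candidate kernels
        out = sum(wi * conv1d_along_axis(out, k, axis)
                  for wi, k in zip(w, candidates))
    return out
```

For example, with candidates consisting of an identity kernel (size 1) and an averaging kernel (size 3), strongly favoring the identity kernel on every axis leaves the volume nearly unchanged, while shifting the logits toward the averaging kernel on one axis smooths the volume only along that dimension.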

Open Access Status

This publication is not available as open access


Link to publisher version (DOI)

http://dx.doi.org/10.1109/TPAMI.2021.3081499