PCTNet: depth estimation from single structured light image with a parallel CNN-transformer network

Publication Name

Measurement Science and Technology

Abstract

Recent approaches based on convolutional neural networks (CNNs) have significantly improved depth estimation from structured light images in fringe-projection and speckle-projection 3D measurement. However, it remains challenging to simultaneously preserve both the global structure and the local details of objects in structured light images of complex scenes. In this paper, we design a parallel CNN-transformer network (PCTNet), which consists of a CNN branch, a transformer branch, a bidirectional feature fusion module (BFFM), and a cross-feature multi-scale fusion module (CFMS). The BFFM and CFMS are proposed to fuse the local and global features of the two branches for better depth estimation. Comprehensive experiments evaluate our model on four structured light datasets: our own simulated fringe and speckle structured light datasets, and public real fringe and speckle structured light datasets. The experiments demonstrate that the proposed PCTNet is an effective architecture, achieving state-of-the-art performance in both qualitative and quantitative evaluations.
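The abstract names the dual-branch design but not its internals. Below is a minimal, hypothetical PyTorch sketch of the general idea only: a CNN branch for local details, a transformer branch for global structure, a simplified stand-in for the BFFM that exchanges features between the branches, and a convolutional regression head standing in for the CFMS. All class names and module internals here (PCTNetSketch, the 1x1-projection fusion, the fusion head) are illustrative assumptions, not the paper's actual architecture.

```python
# Hypothetical sketch of a parallel CNN-transformer depth estimator.
# Module names (BFFM, CFMS) follow the abstract; their internals are
# assumptions for illustration, not the paper's design.
import torch
import torch.nn as nn


class BFFM(nn.Module):
    """Bidirectional feature fusion: each branch receives a 1x1-projected
    residual from the other branch (an assumed, simplified realization)."""
    def __init__(self, channels):
        super().__init__()
        self.to_cnn = nn.Conv2d(channels, channels, 1)
        self.to_trans = nn.Conv2d(channels, channels, 1)

    def forward(self, f_cnn, f_trans):
        return f_cnn + self.to_cnn(f_trans), f_trans + self.to_trans(f_cnn)


class PCTNetSketch(nn.Module):
    def __init__(self, channels=64, heads=4):
        super().__init__()
        self.stem = nn.Conv2d(1, channels, 3, padding=1)
        # CNN branch: local details via stacked convolutions.
        self.cnn = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
        )
        # Transformer branch: global structure via self-attention over pixels.
        layer = nn.TransformerEncoderLayer(
            d_model=channels, nhead=heads, batch_first=True)
        self.trans = nn.TransformerEncoder(layer, num_layers=2)
        self.bffm = BFFM(channels)
        # Stand-in for the CFMS: fuse both streams and regress depth.
        self.head = nn.Sequential(
            nn.Conv2d(2 * channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, 1, 1),
        )

    def forward(self, x):
        b, _, h, w = x.shape
        f = self.stem(x)
        f_cnn = self.cnn(f)
        tokens = f.flatten(2).transpose(1, 2)            # (B, H*W, C)
        f_trans = self.trans(tokens).transpose(1, 2).reshape(b, -1, h, w)
        f_cnn, f_trans = self.bffm(f_cnn, f_trans)
        return self.head(torch.cat([f_cnn, f_trans], dim=1))


if __name__ == "__main__":
    net = PCTNetSketch()
    fringe = torch.randn(1, 1, 64, 64)   # single structured light image
    print(net(fringe).shape)             # torch.Size([1, 1, 64, 64]) depth map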

Open Access Status

This publication is not available as open access

Volume

34

Issue

8

Article Number

085402

Funding Number

2019KJ021

Funding Sponsor

National Natural Science Foundation of China


Link to publisher version (DOI)

http://dx.doi.org/10.1088/1361-6501/acd136