University of Wollongong

PCTNet: depth estimation from single structured light image with a parallel CNN-transformer network

Journal contribution
Posted on 2024-11-17, 15:42, authored by Xinjun Zhu, Zhiqiang Han, Zhizhi Zhang, Limei Song, Hongyi Wang and Qinghua Guo
Recent approaches based on convolutional neural networks have significantly improved the performance of depth estimation from structured light images in fringe projection and speckle projection 3D measurement. However, it remains challenging to simultaneously preserve the global structure and local details of objects in structured light images of complex scenes. In this paper, we design a parallel CNN-transformer network (PCTNet), which consists of a CNN branch, a transformer branch, a bidirectional feature fusion module (BFFM), and a cross-feature multi-scale fusion module (CFMS). The BFFM and CFMS modules are proposed to fuse the local and global features of the two branches in order to achieve better depth estimation. Comprehensive experiments are conducted to evaluate our model on four structured light datasets, i.e., our established simulated fringe and speckle structured light datasets and public real fringe and speckle structured light datasets. Experiments demonstrate that the proposed PCTNet is an effective architecture, achieving state-of-the-art performance in both qualitative and quantitative evaluations.
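
The abstract does not detail the BFFM and CFMS designs, so the following is only a minimal PyTorch sketch of the general parallel two-branch idea: a convolutional branch for local features and a transformer branch for global features run in parallel on a single-channel structured light image, and their feature maps are merged by a simple 1x1-convolution fusion (a hypothetical placeholder, not the paper's modules) before upsampling to a depth map. All class and parameter names (CNNBranch, TransformerBranch, ParallelDepthNet, ch, patch, etc.) are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class CNNBranch(nn.Module):
    """Local-feature branch: a plain convolutional encoder (stand-in for the paper's CNN branch)."""
    def __init__(self, in_ch=1, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
    def forward(self, x):
        return self.net(x)                      # (B, ch, H/4, W/4)

class TransformerBranch(nn.Module):
    """Global-feature branch: patch embedding followed by a transformer encoder."""
    def __init__(self, in_ch=1, dim=64, patch=4, depth=2, heads=4):
        super().__init__()
        self.embed = nn.Conv2d(in_ch, dim, kernel_size=patch, stride=patch)
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
    def forward(self, x):
        f = self.embed(x)                       # (B, dim, H/4, W/4)
        b, c, h, w = f.shape
        tokens = f.flatten(2).transpose(1, 2)   # (B, N, dim) token sequence
        tokens = self.encoder(tokens)
        return tokens.transpose(1, 2).reshape(b, c, h, w)

class ParallelDepthNet(nn.Module):
    """Toy parallel CNN-transformer depth estimator: concatenate the two feature maps,
    fuse them with a 1x1 convolution (placeholder for BFFM/CFMS), and upsample to a depth map."""
    def __init__(self, in_ch=1, ch=64):
        super().__init__()
        self.cnn = CNNBranch(in_ch, ch)
        self.vit = TransformerBranch(in_ch, ch)
        self.fuse = nn.Conv2d(2 * ch, ch, 1)
        self.head = nn.Sequential(
            nn.Upsample(scale_factor=4, mode="bilinear", align_corners=False),
            nn.Conv2d(ch, 1, 3, padding=1),
        )
    def forward(self, x):
        f = torch.cat([self.cnn(x), self.vit(x)], dim=1)
        return self.head(self.fuse(f))          # (B, 1, H, W) depth map

if __name__ == "__main__":
    img = torch.randn(2, 1, 128, 128)           # batch of single-channel structured light images
    depth = ParallelDepthNet()(img)
    print(depth.shape)                          # torch.Size([2, 1, 128, 128])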

Funding: National Natural Science Foundation of China (2019KJ021)

Journal title: Measurement Science and Technology
Volume: 34
Issue: 8
Language: English
