TY - JOUR
T1 - D-Net
T2 - A generalised and optimised deep network for monocular depth estimation
AU - Thompson, Joshua Luke
AU - Phung, Son Lam
AU - Bouzerdoum, Abdesselam
N1 - Publisher Copyright:
© 2013 IEEE.
PY - 2021
Y1 - 2021
N2 - Depth estimation is an essential component in computer vision systems for achieving 3D scene understanding. Efficient and accurate depth map estimation has numerous applications, including self-driving vehicles and virtual reality tools. This paper presents a new deep network, called D-Net, for depth estimation from a single RGB image. The proposed network can be trained end-to-end, and its structure can be customised to meet different requirements in model size, speed, and prediction accuracy. Our approach gathers strong global and local contextual features at multiple resolutions, and then transfers them to higher resolutions to produce sharper depth maps. For the encoder backbone, D-Net can utilise many state-of-the-art models, including EfficientNet, HRNet, and Swin Transformer, to obtain dense depth maps. The proposed D-Net is designed to have a minimal parameter count and reduced computational complexity. Extensive evaluations on the NYUv2 and KITTI benchmark datasets show that our model is highly accurate across multiple backbones, and it achieves state-of-the-art performance on both benchmarks when combined with the Swin Transformer and HRNets.
AB - Depth estimation is an essential component in computer vision systems for achieving 3D scene understanding. Efficient and accurate depth map estimation has numerous applications, including self-driving vehicles and virtual reality tools. This paper presents a new deep network, called D-Net, for depth estimation from a single RGB image. The proposed network can be trained end-to-end, and its structure can be customised to meet different requirements in model size, speed, and prediction accuracy. Our approach gathers strong global and local contextual features at multiple resolutions, and then transfers them to higher resolutions to produce sharper depth maps. For the encoder backbone, D-Net can utilise many state-of-the-art models, including EfficientNet, HRNet, and Swin Transformer, to obtain dense depth maps. The proposed D-Net is designed to have a minimal parameter count and reduced computational complexity. Extensive evaluations on the NYUv2 and KITTI benchmark datasets show that our model is highly accurate across multiple backbones, and it achieves state-of-the-art performance on both benchmarks when combined with the Swin Transformer and HRNets.
KW - 3D scene understanding
KW - Assistive navigation
KW - Convolutional networks
KW - Deep learning
KW - Depth estimation
KW - Monocular vision
KW - Self-driving cars
KW - Vision transformer
UR - http://www.scopus.com/inward/record.url?scp=85116974992&partnerID=8YFLogxK
U2 - 10.1109/ACCESS.2021.3116380
DO - 10.1109/ACCESS.2021.3116380
M3 - Article
AN - SCOPUS:85116974992
SN - 2169-3536
VL - 9
SP - 134543
EP - 134555
JO - IEEE Access
JF - IEEE Access
ER -