PlaneNet: Piece-wise Planar Reconstruction from a Single RGB Image

Figure 1: This paper proposes a deep neural architecture for piece-wise planar depthmap reconstruction from a single RGB image. From left to right: an input image, a piece-wise planar segmentation, a reconstructed depthmap, and a texture-mapped 3D model.


Abstract

This paper proposes a deep neural network (DNN) for piece-wise planar depthmap reconstruction from a single RGB image. While DNNs have brought remarkable progress to single-image depth prediction, piece-wise planar depthmap reconstruction requires a structured geometry representation, and has been a difficult task to master even for DNNs. The proposed end-to-end DNN learns to directly infer a set of plane parameters and corresponding plane segmentation masks from a single RGB image. We have generated more than 50,000 piece-wise planar depthmaps for training and testing from ScanNet, a large-scale RGBD video database. Our qualitative and quantitative evaluations demonstrate that the proposed approach outperforms baseline methods in terms of both plane segmentation and depth estimation accuracy. To the best of our knowledge, this paper presents the first end-to-end neural architecture for piece-wise planar reconstruction from a single RGB image.
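The output representation described above, a set of plane parameters together with per-pixel segmentation masks, determines a depthmap once the camera intrinsics are known: each pixel's depth is where its viewing ray meets its assigned plane. The sketch below illustrates this assembly step under assumptions not stated on this page: planes are written as n·X = d, the camera is a pinhole with intrinsics K, and the function name `planes_to_depth` is illustrative rather than part of the released code.

```python
import numpy as np

def planes_to_depth(plane_params, segmentation, K, height, width):
    """Assemble a depthmap from plane parameters and a plane segmentation.

    plane_params: (P, 4) array; each row [nx, ny, nz, d] encodes n . X = d
                  (an assumed parameterization for this sketch).
    segmentation: (H, W) integer array assigning each pixel a plane index.
    K:            (3, 3) pinhole camera intrinsics.
    """
    # Back-project every pixel (u, v) to its unit-depth ray K^{-1} [u, v, 1]^T.
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    pixels = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T
    rays = (np.linalg.inv(K) @ pixels).T.reshape(height, width, 3)

    # Look up each pixel's plane normal and offset via its segmentation label.
    normals = plane_params[segmentation, :3]   # (H, W, 3)
    offsets = plane_params[segmentation, 3]    # (H, W)

    # Intersect ray with plane: n . (z * ray) = d  =>  z = d / (n . ray).
    denom = np.einsum('hwc,hwc->hw', normals, rays)
    denom = np.where(np.abs(denom) < 1e-4, 1e-4, denom)  # guard grazing rays
    return offsets / denom
```

For example, a single fronto-parallel plane with normal (0, 0, 1) and offset 2 yields a constant depthmap of 2 meters regardless of the intrinsics.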


Paper

PlaneNet: Piece-wise Planar Reconstruction from a Single RGB Image
In Computer Vision and Pattern Recognition (CVPR), 2018
[arXiv] [PDF] [Supp.] [Code and Data]