Inferring distributions over depth from a single image

Abstract

When building geometric scene understanding systems for autonomous vehicles, it is crucial to know when the system might fail. Most contemporary approaches cast the problem as depth regression, whose output is a single depth value for each pixel. Such approaches cannot diagnose when failures might occur. One attractive alternative is a deep Bayesian network, which captures uncertainty in both the model parameters and ambiguous sensor measurements. However, estimating uncertainties is often slow, and the distributions are often limited to being unimodal. In this paper, we recast the continuous problem of depth regression as discrete binary classification, whose output is an unnormalized probability distribution over possible depths for each pixel. Such output allows us to reliably and efficiently capture multi-modal depth distributions in ambiguous cases, such as depth discontinuities and reflective surfaces. Results on standard benchmarks show that our method produces accurate depth predictions and significantly better uncertainty estimates than prior art while running in near real time. Finally, by exploiting the uncertainties of the predicted distribution, we significantly reduce streak-like artifacts and improve accuracy as well as memory efficiency in 3D maps built with monocular depth estimation.
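The core idea of the abstract, replacing per-pixel depth regression with a classification over discrete depth bins, can be sketched as below. This is an illustrative toy example, not the paper's implementation: the bin count, the log-spaced 1–80 m depth range, and the use of entropy as an uncertainty proxy are all assumptions made here for the sketch.

```python
import numpy as np

# Assumed settings for illustration (not from the paper):
# 64 log-spaced depth bins covering 1 m to 80 m.
NUM_BINS = 64
bin_edges = np.geomspace(1.0, 80.0, NUM_BINS + 1)
bin_centers = np.sqrt(bin_edges[:-1] * bin_edges[1:])  # geometric midpoints

def depth_distribution(logits):
    """Normalize per-pixel, per-bin logits (H, W, NUM_BINS) into a
    probability distribution over depth bins for each pixel."""
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    p = np.exp(z)
    return p / p.sum(axis=-1, keepdims=True)

def point_estimate_and_uncertainty(probs):
    """Mode depth (center of the most likely bin) plus entropy as a
    simple uncertainty proxy: high entropy flags ambiguous pixels,
    e.g. depth discontinuities or reflective surfaces."""
    mode = bin_centers[probs.argmax(axis=-1)]
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=-1)
    return mode, entropy

# Toy usage: a 2x2 "image" of random per-bin logits.
rng = np.random.default_rng(0)
logits = rng.normal(size=(2, 2, NUM_BINS))
probs = depth_distribution(logits)
mode, entropy = point_estimate_and_uncertainty(probs)
```

Unlike a regressed scalar, `probs` can place mass on several well-separated bins at once, which is what lets a classification formulation represent multi-modal depth at ambiguous pixels.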

  • Paper
  • Code

  • Demo for dense mapping: