Upgrading Optical Flow to 3D Scene Flow through Optical Expansion

CVPR 2020

Gengshan Yang1 Deva Ramanan1,2
1Robotics Institute, Carnegie Mellon University
2Argo AI


Optical flow vs. optical expansion, where white indicates larger expansion, i.e., motion towards the camera. We upgrade optical flow to 3D scene flow using dense optical expansion, which reveals changes in depth and can be reliably inferred from two frames.

Abstract

We propose an approach for upgrading 2D optical flow to 3D scene flow. Our key insight is that dense optical expansion -- which can be reliably inferred from frame pairs -- reveals changes in depth of scene elements, e.g., things moving closer will get bigger. When integrated with camera intrinsics, optical expansion can be converted into a normalized 3D scene flow vector that provides meaningful directions of 3D movement, but not their magnitudes (due to an underlying scale ambiguity). We show that dense optical expansion between two views can be learned from annotated optical flow maps or unlabeled video sequences, and applied to a variety of dynamic 3D perception tasks, including monocular scene flow, LiDAR scene flow, and time-to-collision estimation, often demonstrating significant improvement over prior art.
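The geometry behind the "upgrade" in the abstract can be sketched in a few lines. The following is a minimal illustration, not the authors' code: it assumes a pinhole camera with intrinsics K, and the convention that optical expansion s is the patch scale change between frames, so the motion-in-depth ratio is Z1/Z0 = 1/s (s > 1 means the point moved closer). Dividing the scene flow by the unknown depth Z0 gives the normalized 3D scene flow vector, whose direction is meaningful but whose magnitude is not (the scale ambiguity mentioned above). The function name and example intrinsics are hypothetical.

```python
import numpy as np

def normalized_scene_flow(p0, flow, expansion, K):
    """Hypothetical helper sketching the abstract's geometry: upgrade a 2D
    optical flow vector at pixel p0 to a normalized 3D scene flow vector,
    given the optical expansion s (patch scale change) and intrinsics K.
    Returns (X1 - X0) / Z0, i.e. scene flow up to the unknown depth Z0."""
    K_inv = np.linalg.inv(K)
    p0_h = np.array([p0[0], p0[1], 1.0])                       # pixel, frame 0
    p1_h = np.array([p0[0] + flow[0], p0[1] + flow[1], 1.0])   # pixel, frame 1
    # X0 / Z0 = K^-1 p0;  X1 / Z0 = (Z1 / Z0) K^-1 p1, with Z1/Z0 = 1/s.
    return (1.0 / expansion) * (K_inv @ p1_h) - K_inv @ p0_h

# Example intrinsics (hypothetical): focal length 720 px, principal point (320, 240).
K = np.array([[720.0,   0.0, 320.0],
              [  0.0, 720.0, 240.0],
              [  0.0,   0.0,   1.0]])

# A point at the principal point with zero flow that doubles in size (s = 2)
# moves purely toward the camera: normalized scene flow [0, 0, -0.5].
print(normalized_scene_flow((320.0, 240.0), (0.0, 0.0), 2.0, K))
```

Note that recovering metric scene flow would require the actual depth Z0 (e.g., from LiDAR or stereo), which is exactly how the normalized vector is used in the downstream tasks listed above.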

[Paper] [5min oral] [Slides] [Bibtex]

1min Video

Code

Code is available here.

Acknowledgments

This work was supported by the CMU Argo AI Center for Autonomous Vehicle Research. We also thank Chaoyang Wang and Peiyun Hu for insightful discussions, and many friends at CMU for valuable suggestions. The teaser video is inspired by "Unsupervised Moving Object Detection via Contextual Information Separation, CVPR 2019."

Webpage design borrowed from Peiyun Hu