nju-ee / MODE-2022
This repository contains the source code for our paper: MODE: Multi-view Omnidirectional Depth Estimation with 360-degree Cameras (ECCV 2022).
Alternatives and similar repositories for MODE-2022
Users interested in MODE-2022 are comparing it to the repositories listed below.
- [CVPR 2022] RM-Depth: Unsupervised Learning of Recurrent Monocular Depth in Dynamic Scenes
- [ECCV 2022] PanoFormer: Panorama Transformer for Indoor 360° Depth Estimation
- [CVPR 2023] Rethinking Optical Flow from Geometric Matching Consistent Perspective
- [IROS 2023] TemporalStereo: Efficient Spatial-Temporal Stereo Matching Network
- [CVPR 2023] Multi-frame depth estimation in dynamic scenes. -- Li, Rui, et al. "Learning to Fuse Monocular and Multi-view Cues for Multi-…"
- [CVPR 2022 Oral] Official PyTorch Implementation of "OmniFusion: 360 Monocular Depth Estimation via Geometry-Aware Fusion"
- VA-DepthNet: A Variational Approach to Single Image Depth Prediction