wudongming97 / OnlineRefer
[ICCV 2023] OnlineRefer: A Simple Online Baseline for Referring Video Object Segmentation
☆57 · Updated Oct 7, 2023
Alternatives and similar repositories for OnlineRefer
Users interested in OnlineRefer are comparing it to the repositories listed below.
- [ICCV 2023] Robust Referring Video Object Segmentation with Cyclic Structural Consistency ☆30 · Updated Mar 13, 2024
- [ICCV 2023] Spectrum-guided Multi-granularity Referring Video Object Segmentation ☆111 · Updated Apr 9, 2025
- Referring Video Object Segmentation / Multi-Object Tracking Repo ☆90 · Updated Jul 27, 2023
- [ICCV 2023 Workshop] The Official Implementation of The First Prize Solution for RVOS Competition ☆14 · Updated Jan 1, 2024
- [TCSVT 2024] Temporally Consistent Referring Video Object Segmentation with Hybrid Memory ☆19 · Updated Apr 9, 2025
- [CVPR 2024] LoSh: Long-Short Text Joint Prediction Network for Referring Video Object Segmentation ☆13 · Updated Jun 17, 2024
- [CVPR 2022] Official Implementation of ReferFormer ☆352 · Updated Feb 15, 2025
- [ICCV 2023] CTVIS: Consistent Training for Online Video Instance Segmentation ☆80 · Updated Oct 15, 2023
- Wnet: Audio-Guided Video Object Segmentation via Wavelet-Based Cross-Modal Denoising Networks ☆24 · Updated Sep 6, 2022
- [ECCV 2024] Code for the paper "Exploring Pre-trained Text-to-Video Diffusion Models for Referring Video Object Segmentation" ☆47 · Updated Sep 28, 2024
- [ICCV 2023] Betrayed by Captions: Joint Caption Grounding and Generation for Open Vocabulary Instance Segmentation ☆48 · Updated Jul 18, 2024
- [NeurIPS 2023] The official implementation of SOC: Semantic-Assisted Object Cluster for Referring Video Object Segmentation ☆33 · Updated Mar 16, 2024
- The benchmark for "Video Object Segmentation in Panoptic Wild Scenes" ☆12 · Updated Oct 17, 2023
- ☆15 · Updated May 25, 2024
- Referring Image Segmentation Benchmarking with Segment Anything Model (SAM)