zwq456 / CLIP-VIS
[IEEE TCSVT] Official PyTorch Implementation of CLIP-VIS: Adapting CLIP for Open-Vocabulary Video Instance Segmentation.
☆46 · Updated last year
Alternatives and similar repositories for CLIP-VIS
Users interested in CLIP-VIS are comparing it to the repositories listed below
- [ECCV 2024] Code for Betrayed by Attention: A Simple yet Effective Approach for Self-supervised Video Object Segmentation ☆34 · Updated 10 months ago
- [AAAI 2024] Referred by Multi-Modality: A Unified Temporal Transformers for Video Object Segmentation ☆82 · Updated 7 months ago
- [CVPR 2024] The repository contains the official implementation of "Open-Vocabulary Segmentation with Semantic-Assisted Calibration" ☆75 · Updated last year
- Official Repo for PosSAM: Panoptic Open-vocabulary Segment Anything ☆70 · Updated last year
- [ECCV 2024] ClearCLIP: Decomposing CLIP Representations for Dense Vision-Language Inference ☆96 · Updated 9 months ago
- Code for the paper "Exploring Pre-trained Text-to-Video Diffusion Models for Referring Video Object Segmentation", ECCV 2024 ☆45 · Updated last year
- [ECCV 2024] PartGLEE: A Foundation Model for Recognizing and Parsing Any Objects ☆57 · Updated last year
- ☆59 · Updated last year
- Large-Vocabulary Video Instance Segmentation dataset ☆95 · Updated last year
- ☆34 · Updated last month
- [ECCV 2024] ProxyCLIP: Proxy Attention Improves CLIP for Open-Vocabulary Segmentation ☆111 · Updated 9 months ago
- ☆133 · Updated last year
- [CVPR 2025] DeCLIP: Decoupled Learning for Open-Vocabulary Dense Perception ☆148 · Updated last week
- [CVPR 2024 Challenge] 1st Place Solution for MeViS Track in CVPR 2024 PVUW Workshop: Motion Expression guided Video Segmentation ☆32 · Updated last year
- [NeurIPS 2024] Understanding Multi-Granularity for Open-Vocabulary Part Segmentation