yafeng19 / T-CORE
[CVPR 2025] PyTorch implementation of T-CORE, introduced in "When the Future Becomes the Past: Taming Temporal Correspondence for Self-supervised Video Representation Learning".
☆16 · Updated 6 months ago
Alternatives and similar repositories for T-CORE
Users interested in T-CORE are comparing it to the libraries listed below.
- ☆16 · Updated last year
- Official code for CVPR 2024 “VideoMAC: Video Masked Autoencoders Meet ConvNets” ☆12 · Updated last year
- [NeurIPS 2023] The official implementation of SOC: Semantic-Assisted Object Cluster for Referring Video Object Segmentation ☆33 · Updated last year
- [CVPR 2025] COSMOS: Cross-Modality Self-Distillation for Vision Language Pre-training ☆31 · Updated 6 months ago
- [ECCV24] VISA: Reasoning Video Object Segmentation via Large Language Model ☆192 · Updated last year
- [ICLR 2024] FROSTER: Frozen CLIP is a Strong Teacher for Open-Vocabulary Action Recognition ☆90 · Updated 9 months ago
- The official implementation of our paper “IteRPrimE: Zero-shot Referring Image Segmentation with Iterative Grad-CAM Refinement and Prima… ☆14 · Updated 6 months ago
- A list of referring video object segmentation papers ☆52 · Updated 4 months ago
- Official implementation of the CVPR'24 paper [Adaptive Slot Attention: Object Discovery with Dynamic Slot Number] ☆57 · Updated 8 months ago
- [CVPR 2025] FLAIR: VLM with Fine-grained Language-informed Image Representations ☆108 · Updated last month
- [NeurIPS 2024] One Token to Seg Them All: Language Instructed Reasoning Segmentation in Videos ☆137 · Updated 9 months ago
- [ICLR 2025] TimeSuite: Improving MLLMs for Long Video Understanding via Grounded Tuning ☆45 · Updated 6 months ago
- [ICCV 2025] Official PyTorch Code for "Advancing Textual Prompt Learning with Anchored Attributes" ☆101 · Updated this week
- ☆59 · Updated last year
- [CVPR 2025 Highlight] Your Large Vision-Language Model Only Needs A Few Attention Heads For Visual Grounding ☆39 · Updated last month
- The official PyTorch implementation of the CVPR 2023 paper "Contrastive Grouping with Transformer for Referring Image Segmentation" ☆51 · Updated last year
- [ICCV 2023] The official code of Bridging Vision and Language Encoders: Parameter-Efficient Tuning for Referring Image Segmentation ☆137 · Updated 3 months ago
- [ICLR 2024] Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models ☆93 · Updated last year
- [AAAI 2025] Open-vocabulary Video Instance Segmentation codebase built upon Detectron2, which is really easy to use ☆24 · Updated 9 months ago
- FineCLIP: Self-distilled Region-based CLIP for Better Fine-grained Understanding (NeurIPS 2024) ☆31 · Updated last month
- [CVPR 2025 🔥] A Large Multimodal Model for Pixel-Level Visual Grounding in Videos ☆86 · Updated 6 months ago
- The official implementation of A Counting-Aware Hierarchical Decoding Framework for Generalized Referring Expression Segmentation ☆23 · Updated 2 months ago
- [CVPR 2025] Number it: Temporal Grounding Videos like Flipping Manga ☆123 · Updated last week
- ☆24 · Updated 2 years ago
- [ICLR 2024 Spotlight] Code Release of CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction ☆195 · Updated last year
- Official implementation of "Test-Time Zero-Shot Temporal Action Localization", CVPR 2024 ☆66 · Updated last year
- Self-Calibrated CLIP for Training-Free Open-Vocabulary Segmentation ☆56 · Updated 4 months ago
- [ICCV 2023] Generative Prompt Model for Weakly Supervised Object Localization ☆57 · Updated last year
- Universal Video Temporal Grounding with Generative Multi-modal Large Language Models ☆29 · Updated 3 months ago