Qinying-Liu / TagAlign
Official implementation of TagAlign
☆35 · Updated Dec 11, 2024
Alternatives and similar repositories for TagAlign
Users interested in TagAlign are comparing it to the repositories listed below.
- [CVPR'23] A Simple Framework for Text-Supervised Semantic Segmentation · ☆59 · Updated Jan 26, 2025
- OVSegmentor (CVPR'23) · ☆60 · Updated Apr 22, 2024
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions · ☆138 · Updated May 8, 2025
- [ICLR 2024 Spotlight] Code release of CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction · ☆201 · Updated Feb 5, 2024
- ☆20 · Updated Oct 19, 2023
- Official implementation of SCLIP: Rethinking Self-Attention for Dense Vision-Language Inference · ☆180 · Updated Oct 10, 2024
- ☆25 · Updated Nov 22, 2024
- ☆12 · Updated Jul 21, 2022
- ☆58 · Updated Aug 7, 2023
- [NLPCC'23] ZeroGen: Zero-shot Multimodal Controllable Text Generation with Multiple Oracles, PyTorch implementation · ☆14 · Updated Oct 7, 2023
- Official implementation of Attentive Mask CLIP (ICCV 2023, https://arxiv.org/abs/2212.08653) · ☆34 · Updated May 29, 2024
- NegCLIP · ☆38 · Updated Feb 6, 2023
- ☆17 · Updated Dec 13, 2023
- [ICCV 2023] ALIP: Adaptive Language-Image Pre-training with Synthetic Caption · ☆104 · Updated Sep 18, 2023
- PyTorch code for the paper "From CLIP to DINO: Visual Encoders Shout in Multi-modal Large Language Models" · ☆206 · Updated Jan 8, 2025
- [Pattern Recognition '25] CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks · ☆462 · Updated Mar 1, 2025
- DenseFusion-1M: Merging Vision Experts for Comprehensive Multimodal Perception · ☆159 · Updated Dec 6, 2024
- [CVPR 2024 CVinW] Multi-Agent VQA: Exploring Multi-Agent Foundation Models on Zero-Shot Visual Question Answering · ☆20 · Updated Sep 21, 2024
- [NeurIPS 2024] Classification Done Right for Vision-Language Pre-Training · ☆227 · Updated Mar 20, 2025
- Official implementation of GRIT-VLP · ☆20 · Updated Aug 8, 2022
- ☆45 · Updated Oct 3, 2023
- Official repository for the CoMM dataset · ☆49 · Updated Dec 31, 2024
- Official implementation of "Prompt Pre-Training with Over Twenty-Thousand Classes for Open-Vocabulary Visual Recognition" · ☆259 · Updated May 3, 2024
- ☆24 · Updated Apr 17, 2024
- ☆19 · Updated Dec 6, 2023
- Official implementation of "CLIP-DINOiser: Teaching CLIP a Few DINO Tricks" · ☆274 · Updated Oct 26, 2024
- ☆22 · Updated Dec 11, 2024
- MATE: Masked Autoencoders are Online 3D Test-Time Learners (ICCV 2023) · ☆22 · Updated Jul 22, 2023
- Detail-Oriented CLIP for Fine-Grained Tasks (ICLR SSI-FM 2025) · ☆57 · Updated Mar 26, 2025
- [NeurIPS 2023] Bootstrapping Vision-Language Learning with Decoupled Language Pre-training · ☆26 · Updated Dec 5, 2023
- [NeurIPS 2023] Rewrite Caption Semantics: Bridging Semantic Gaps for Language-Supervised Semantic Segmentation · ☆20 · Updated Jan 3, 2024
- 📍 Official repository of "ProtoCLIP: Prototypical Contrastive Language Image Pretraining" (IEEE TNNLS 2023) · ☆55 · Updated Nov 8, 2023
- ☆50 · Updated Oct 29, 2023
- [CVPR 2024] CapsFusion: Rethinking Image-Text Data at Scale · ☆213 · Updated Feb 27, 2024
- Official code repository of "PIN: Positional Insert Unlocks Object Localisation Abilities in VLMs" · ☆26 · Updated Jan 14, 2025
- Official code of "Uncovering Prototypical Knowledge for Weakly Open-Vocabulary Semantic Segmentation" (NeurIPS 2023) · ☆26 · Updated Dec 7, 2023
- [ECCV 2024] ProxyCLIP: Proxy Attention Improves CLIP for Open-Vocabulary Segmentation · ☆111 · Updated Mar 26, 2025
- ViCToR: Improving Visual Comprehension via Token Reconstruction for Pretraining LMMs · ☆28 · Updated Aug 15, 2025
- Official implementation of the ECCV 2024 paper "Facial Affective Behavior Analysis with Instruction Tuning" · ☆29 · Updated Jan 8, 2025