microsoft / A-CLIP
Official implementation of Attentive Mask CLIP (ICCV 2023, https://arxiv.org/abs/2212.08653)
☆29 · Updated 9 months ago
Alternatives and similar repositories for A-CLIP:
Users interested in A-CLIP are comparing it to the repositories listed below.
- [CVPR 2025] Code release of F-LMM: Grounding Frozen Large Multimodal Models · ☆76 · Updated 7 months ago
- The official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding". … · ☆51 · Updated 4 months ago
- Official code for Fine-Grained Visual Prompting, NeurIPS 2023 · ☆51 · Updated last year
- PyTorch implementation for the CVPR 2024 paper: Learn to Rectify the Bias of CLIP for Unsupervised Semantic Segmentation · ☆36 · Updated last month
- [CVPR 2024] The code of "UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory" · ☆67 · Updated 5 months ago
- Official implementation of TagAlign · ☆34 · Updated 3 months ago
- [ECCV 2024] ProxyCLIP: Proxy Attention Improves CLIP for Open-Vocabulary Segmentation · ☆78 · Updated last month
- ☆77 · Updated last year
- [ICLR 2023] Masked Frequency Modeling for Self-Supervised Visual Pre-Training · ☆74 · Updated last year
- ☆20 · Updated last year
- [ICCV 2023] ALIP: Adaptive Language-Image Pre-training with Synthetic Caption · ☆97 · Updated last year
- Implementation of "VL-Mamba: Exploring State Space Models for Multimodal Learning" · ☆81 · Updated last year
- [CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners · ☆41 · Updated last year
- cliptrase · ☆33 · Updated 6 months ago
- [CVPR 2023] Implementation of Siamese Image Modeling for Self-Supervised Vision Representation Learning · ☆35 · Updated 9 months ago
- [CVPR 2025] Official PyTorch codebase for the paper "Assessing and Learning Alignment of Unimodal Vision and Language Models" · ☆30 · Updated last month
- 📍 Official PyTorch implementation of the paper "ProtoCLIP: Prototypical Contrastive Language Image Pretraining" (IEEE TNNLS) · ☆52 · Updated last year
- ☆29 · Updated 6 months ago
- [ICLR 2024] Test-Time Adaptation with CLIP Reward for Zero-Shot Generalization in Vision-Language Models · ☆73 · Updated 8 months ago
- Code for the paper "SuS-X: Training-Free Name-Only Transfer of Vision-Language Models" [ICCV'23] · ☆97 · Updated last year
- [ICLR'24] Consistency-guided Prompt Learning for Vision-Language Models · ☆67 · Updated 10 months ago
- Official implementation of "Read-only Prompt Optimization for Vision-Language Few-shot Learning", ICCV 2023 · ☆53 · Updated last year
- ☆52 · Updated 2 years ago
- [ECCV 2024] ClearCLIP: Decomposing CLIP Representations for Dense Vision-Language Inference · ☆77 · Updated this week
- Official PyTorch codebase for Open-Vocabulary Instance Segmentation without Manual Mask Annotations [CVPR 2023] · ☆50 · Updated 2 months ago
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" · ☆37 · Updated 3 months ago
- [CVPR 2024] Dual Memory Networks: A Versatile Adaptation Approach for Vision-Language Models · ☆69 · Updated 8 months ago
- ☆23 · Updated 2 years ago
- [CVPR'23] Hard Patches Mining for Masked Image Modeling · ☆90 · Updated last year
- Code for "Label Propagation for Zero-shot Classification with Vision-Language Models" (CVPR 2024) · ☆36 · Updated 8 months ago