lnairGT / CLIP-Distillation
Knowledge Distillation using Contrastive Language-Image Pretraining (CLIP) without a teacher model.
☆18 · Updated last year
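As a rough illustration of what teacher-free CLIP distillation can look like: one common recipe is to train a small "student" image encoder against frozen CLIP text embeddings of class names, using them as soft contrastive targets instead of a teacher image model. The sketch below is an assumption about the general technique, not this repository's actual code or API; all names, dimensions, and the random stand-in tensors are illustrative.

```python
# Hedged sketch of teacher-free CLIP-style distillation: a lightweight
# student image encoder is aligned with frozen text embeddings (e.g.,
# CLIP text-encoder outputs for class names) via a contrastive loss.
# Everything here (names, dims, data) is an illustrative assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class StudentEncoder(nn.Module):
    """Tiny stand-in for a lightweight student image encoder."""
    def __init__(self, in_dim=512, embed_dim=256):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(in_dim, embed_dim),
            nn.ReLU(),
            nn.Linear(embed_dim, embed_dim),
        )

    def forward(self, x):
        # L2-normalize so dot products are cosine similarities.
        return F.normalize(self.proj(x), dim=-1)

def distill_step(student, images, text_embeds, labels, temperature=0.07):
    """One training step: push each student image embedding toward the
    frozen text embedding of its ground-truth class (contrastive CE)."""
    img = student(images)                   # (B, D) image embeddings
    txt = F.normalize(text_embeds, dim=-1)  # (C, D) frozen text targets
    logits = img @ txt.t() / temperature    # (B, C) similarity logits
    return F.cross_entropy(logits, labels)

torch.manual_seed(0)
student = StudentEncoder()
images = torch.randn(8, 512)        # stand-in for image features
text_embeds = torch.randn(10, 256)  # stand-in for CLIP text embeddings
labels = torch.randint(0, 10, (8,))
loss = distill_step(student, images, text_embeds, labels)
loss.backward()  # gradients flow only into the student
print(float(loss))
```

Because the text embeddings are fixed, no teacher image encoder is needed at training time, which is the appeal of this style of distillation.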
Alternatives and similar repositories for CLIP-Distillation
Users interested in CLIP-Distillation are comparing it with the repositories listed below.
- Evaluation and dataset construction code for the CVPR 2025 paper "Vision-Language Models Do Not Understand Negation" ☆44 · Updated 9 months ago
- Official implementation of the CVPR 2024 paper "Retrieval-Augmented Open-Vocabulary Object Detection" ☆44 · Updated last year
- [ICCV 2023] ViLLA: Fine-grained vision-language representation learning from real-world data ☆46 · Updated 2 years ago
- Data-Efficient Multimodal Fusion on a Single GPU ☆68 · Updated last year
- 🔥MixPro: Data Augmentation with MaskMix and Progressive Attention Labeling for Vision Transformer [Official, ICLR 2023] ☆22 · Updated 2 years ago
- Distribution-Aware Prompt Tuning for Vision-Language Models (ICCV 2023) ☆44 · Updated 2 years ago
- S-CLIP: Semi-supervised Vision-Language Pre-training using Few Specialist Captions ☆50 · Updated 2 years ago
- An efficient tuning method for VLMs ☆80 · Updated last year
- SVL-Adapter: Self-Supervised Adapter for Vision-Language Pretrained Models ☆21 · Updated 2 years ago
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding ☆55 · Updated 10 months ago
- [ICLR 2023] Contrastive Alignment of Vision to Language Through Parameter-Efficient Transfer Learning ☆40 · Updated 2 years ago
- Code and results accompanying the paper "CHiLS: Zero-Shot Image Classification with Hierarchical Label Sets" ☆59 · Updated 2 years ago
- Official PyTorch implementation of the NeurIPS 2022 paper "TokenMixup" ☆48 · Updated 3 years ago
- Code for "Multitask Vision-Language Prompt Tuning" (https://arxiv.org/abs/2211.11720) ☆56 · Updated last year
- ☆45 · Updated 4 months ago
- Visual question answering prompting recipes for large vision-language models ☆28 · Updated last year
- Implementation of "VL-Mamba: Exploring State Space Models for Multimodal Learning" ☆86 · Updated last year
- Code for the ACL 2023 oral paper "ManagerTower: Aggregating the Insights of Uni-Modal Experts for Vision-Language Representation Learning" ☆12 · Updated 5 months ago
- Official implementation of Attentive Mask CLIP (ICCV 2023, https://arxiv.org/abs/2212.08653) ☆34 · Updated last year
- [CVPR 2024] Improving language-visual pretraining efficiency by performing cluster-based masking on images ☆31 · Updated last year
- ☆29 · Updated 2 years ago
- Official implementation of the paper "MMFuser: Multimodal Multi-Layer Feature Fuser for Fine-Grained Vision-Language Understanding" … ☆62 · Updated last year
- [NeurIPS 2024] Official PyTorch implementation of "Improving Compositional Reasoning of CLIP via Synthetic Vision-Language Negatives" ☆46 · Updated last year
- [ACL 2023] PuMer: Pruning and Merging Tokens for Efficient Vision Language Models ☆36 · Updated last year
- [CVPR 2024] Validation-free few-shot adaptation of CLIP, using a well-initialized Linear Probe (ZSLP) and class-adaptive constraints (CLAP) … ☆80 · Updated 8 months ago
- [NeurIPS 2023] Meta-Adapter ☆48 · Updated 2 years ago
- 📍 Official repository of the paper "ProtoCLIP: Prototypical Contrastive Language Image Pretraining" (IEEE TNNLS 2023) ☆55 · Updated 2 years ago
- [CVPR 2025] VideoICL: Confidence-based Iterative In-context Learning for Out-of-Distribution Video Understanding ☆24 · Updated 10 months ago
- [ICLR 2023] Official code repository for "Meta Learning to Bridge Vision and Language Models for Multimodal Few-Shot Learning" ☆60 · Updated 2 years ago
- PyTorch implementation of the paper "LF-ViT: Reducing Spatial Redundancy in Vision Transformer for Efficient Image Recognition" ☆11 · Updated last year