deepglint / ALIP
[ICCV 2023] ALIP: Adaptive Language-Image Pre-training with Synthetic Caption
☆98 · Updated last year
Alternatives and similar repositories for ALIP:
Users interested in ALIP are comparing it to the repositories listed below.
- ☆30 · Updated last year
- [CVPR2024] The code of "UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory" ☆68 · Updated 6 months ago
- Code for the paper: "SuS-X: Training-Free Name-Only Transfer of Vision-Language Models" [ICCV'23] ☆99 · Updated last year
- Official Codes for Fine-Grained Visual Prompting, NeurIPS 2023 ☆50 · Updated last year
- Task Residual for Tuning Vision-Language Models (CVPR 2023) ☆72 · Updated last year
- SeqTR: A Simple yet Universal Network for Visual Grounding ☆134 · Updated 6 months ago
- ☆113 · Updated last year
- [ECCV 2024] Official PyTorch implementation of DreamLIP: Language-Image Pre-training with Long Captions ☆129 · Updated 5 months ago
- [ICCV 2023] Code for "Not All Features Matter: Enhancing Few-shot CLIP with Adaptive Prior Refinement" ☆145 · Updated last year
- [CVPR2023] The code for "Position-guided Text Prompt for Vision-Language Pre-training" ☆152 · Updated last year
- ☆79 · Updated last year
- 📍 Official PyTorch implementation of the paper "ProtoCLIP: Prototypical Contrastive Language Image Pretraining" (IEEE TNNLS) ☆52 · Updated last year
- [AAAI 2024] TagCLIP: A Local-to-Global Framework to Enhance Open-Vocabulary Multi-Label Classification of CLIP Without Training ☆84 · Updated last year
- ☆91 · Updated last year
- [CVPR2025] Code Release of F-LMM: Grounding Frozen Large Multimodal Models ☆84 · Updated 9 months ago
- [ICCV 2023] Generative Prompt Model for Weakly Supervised Object Localization ☆57 · Updated last year
- [CVPR 2024] Official implementation of the paper "DePT: Decoupled Prompt Tuning" ☆98 · Updated 5 months ago
- The official repository for the ICLR 2024 paper "FROSTER: Frozen CLIP is a Strong Teacher for Open-Vocabulary Action Recognition" ☆79 · Updated 3 months ago
- Official PyTorch implementation of Clover: Towards A Unified Video-Language Alignment and Fusion Model (CVPR2023) ☆41 · Updated 2 years ago
- [ICLR2024 Spotlight] Code Release of CLIPSelf: Vision Transformer Distills Itself for Open-Vocabulary Dense Prediction ☆186 · Updated last year
- [ICLR2024] Exploring Target Representations for Masked Autoencoders ☆55 · Updated last year
- [CVPR2023] Code Release of Aligning Bag of Regions for Open-Vocabulary Object Detection ☆182 · Updated last year
- Awesome Vision-Language Pretraining Papers ☆30 · Updated 3 months ago
- Composed Video Retrieval ☆55 · Updated last year
- Official repository for "Vita-CLIP: Video and text adaptive CLIP via Multimodal Prompting" [CVPR 2023] ☆116 · Updated last year
- [ICLR 2024, Spotlight] Sentence-level Prompts Benefit Composed Image Retrieval ☆82 · Updated last year
- ☆61 · Updated last year
- Official code for the ICCV 2023 paper "Improving Zero-Shot Generalization for CLIP with Synthesized Prompts" ☆100 · Updated last year
- Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs ☆90 · Updated 3 months ago
- Source code of our CVPR2024 paper TeachCLIP for Text-to-Video Retrieval ☆31 · Updated 2 months ago