alinlab / s-clip
S-CLIP: Semi-supervised Vision-Language Pre-training using Few Specialist Captions
☆49 · Updated 2 years ago
Alternatives and similar repositories for s-clip
Users that are interested in s-clip are comparing it to the libraries listed below
- [CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners ☆43 · Updated 2 years ago
- Code for the paper: "SuS-X: Training-Free Name-Only Transfer of Vision-Language Models" [ICCV'23] ☆105 · Updated 2 years ago
- [ICLR 2024] Test-Time RL with CLIP Feedback for Vision-Language Models ☆95 · Updated last month
- Official implementation of UPL (Unsupervised Prompt Learning for Vision-Language Models) ☆117 · Updated 3 years ago
- [ICCV 2023] Diverse Data Augmentation with Diffusions for Effective Test-time Prompt Tuning & [IJCV 2025] Diffusion-Enhanced Test-time Adap… ☆68 · Updated 10 months ago
- SVL-Adapter: Self-Supervised Adapter for Vision-Language Pretrained Models ☆20 · Updated last year
- [ICLR 2023] Official code repository for "Meta Learning to Bridge Vision and Language Models for Multimodal Few-Shot Learning" ☆60 · Updated 2 years ago
- [CVPR 2024] Contrasting Intra-Modal and Ranking Cross-Modal Hard Negatives to Enhance Visio-Linguistic Fine-grained Understanding