Robust fine-tuning of zero-shot models
☆760 · Updated Apr 29, 2022
Alternatives and similar repositories for wise-ft
Users interested in wise-ft are comparing it to the repositories listed below.
- An open source implementation of CLIP ☆13,430 · Updated this week
- Easily compute CLIP embeddings and build a CLIP retrieval system with them ☆2,732 · Updated Aug 15, 2025
- ☆574 · Updated Jul 19, 2022
- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time ☆507 · Updated Jul 15, 2024
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆224 · Updated Dec 16, 2022
- Code for "Finetune Like You Pretrain: Improved Finetuning of Zero-Shot Vision Models" ☆105 · Updated Aug 13, 2023
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) ☆2,182 · Updated May 20, 2024
- [NeurIPS 2023] Official implementation of the paper "An Inverse Scaling Law for CLIP Training" ☆319 · Updated Jun 3, 2024
- ☆29 · Updated Oct 18, 2022
- CLIP-like model evaluation ☆802 · Updated Jan 15, 2026
- NeurIPS 2025 Spotlight; ICLR 2024 Spotlight; CVPR 2024; EMNLP 2024 ☆1,815 · Updated Nov 27, 2025
- Patching open-vocabulary models by interpolating weights ☆91 · Updated Sep 28, 2023
- A PyTorch Lightning solution for training OpenAI's CLIP from scratch ☆718 · Updated Apr 15, 2022
- Grounded Language-Image Pre-training ☆2,575 · Updated Jan 24, 2024
- DataComp: In search of the next generation of multimodal datasets ☆772 · Updated Apr 28, 2025
- ☆200 · Updated May 10, 2023
- [NeurIPS 2023] Text data, code and pre-trained models for the paper "Improving CLIP Training with Language Rewrites" ☆289 · Updated Jan 14, 2024
- Easily turn large sets of image URLs into an image dataset. Can download, resize and package 100M URLs in 20h on one machine. ☆4,371 · Updated Oct 19, 2025
- PyTorch code for BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation ☆5,681 · Updated Aug 5, 2024
- ☆661 · Updated Nov 28, 2023
- CLIP (Contrastive Language-Image Pretraining): predict the most relevant text snippet given an image ☆32,642 · Updated Feb 18, 2026
- An official PyTorch implementation of CLIPPR ☆30 · Updated Jul 22, 2023
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆675 · Updated Sep 19, 2022
- Code release for SLIP: Self-supervision Meets Language-Image Pre-training ☆787 · Updated Feb 9, 2023
- EVA Series: Visual Representation Fantasies from BAAI ☆2,647 · Updated Aug 1, 2024
- [CVPR 2023] Prompt, Generate, then Cache: Cascade of Foundation Models Makes Strong Few-shot Learners ☆381 · Updated Jun 1, 2023
- Experiments and data for the paper "When and why vision-language models behave like bags-of-words, and what to do about it?", Oral @ ICLR … ☆292 · Updated Jun 7, 2023
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆141 · Updated Dec 16, 2025
- LAVIS: A One-stop Library for Language-Vision Intelligence ☆11,167 · Updated Nov 18, 2024
- Editing Models with Task Arithmetic ☆535 · Updated Jan 11, 2024
- Code release for "Detecting Twenty-thousand Classes using Image-level Supervision" ☆1,999 · Updated Mar 21, 2024
- Official codebase used to develop Vision Transformer, SigLIP, MLP-Mixer, LiT and more ☆3,371 · Updated May 19, 2025
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆407 · Updated Nov 10, 2023
- An open-source framework for training large multimodal models ☆4,071 · Updated Aug 31, 2024
- Code for T-MARS data filtering ☆35 · Updated Aug 23, 2023
- VISSL is FAIR's library of extensible, modular and scalable components for SOTA self-supervised learning with images ☆3,295 · Updated Mar 3, 2024
- Model Stock: All We Need Is Just a Few Fine-tuned Models ☆129 · Updated Aug 9, 2025
- [ICCV 2023] CLIPN for Zero-Shot OOD Detection: Teaching CLIP to Say No ☆142 · Updated Dec 2, 2023
- Code and results accompanying the paper "RLSbench: Domain Adaptation under Relaxed Label Shift" ☆35 · Updated Jul 19, 2023
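Several repositories in this list (wise-ft itself, model soups, weight patching, task arithmetic, Model Stock) share one core operation: arithmetic on the weights of fine-tuned checkpoints. A minimal sketch of WiSE-FT-style interpolation between a zero-shot and a fine-tuned model, using plain dicts of NumPy arrays as stand-ins for real state dicts; the function name and toy values are illustrative, not taken from any of the listed codebases:

```python
import numpy as np

def interpolate_weights(zero_shot, fine_tuned, alpha):
    """Per-parameter mix: (1 - alpha) * zero-shot + alpha * fine-tuned.

    alpha = 0 recovers the zero-shot model, alpha = 1 the fine-tuned one;
    intermediate values trade distributional robustness against
    in-distribution accuracy.
    """
    assert zero_shot.keys() == fine_tuned.keys(), "checkpoints must match"
    return {
        name: (1 - alpha) * zero_shot[name] + alpha * fine_tuned[name]
        for name in zero_shot
    }

# Toy "checkpoints" standing in for real model state dicts.
zs = {"w": np.array([1.0, 0.0]), "b": np.array([0.0])}
ft = {"w": np.array([0.0, 2.0]), "b": np.array([1.0])}

mixed = interpolate_weights(zs, ft, alpha=0.5)
print(mixed["w"])  # [0.5 1. ]
print(mixed["b"])  # [0.5]
```

A uniform model soup is the same idea extended to many checkpoints: average the parameter dicts of several fine-tuned models element-wise instead of mixing just two.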