CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet
☆224 · Dec 16, 2022
Alternatives and similar repositories for FT-CLIP
Users interested in FT-CLIP are comparing it to the repositories listed below.
- Code release for the research paper "Exploring Long-Sequence Masked Autoencoders" · ☆100 · Oct 14, 2022
- Robust fine-tuning of zero-shot models · ☆760 · Apr 29, 2022
- ☆574 · Jul 19, 2022
- [NeurIPS 2023] Text data, code, and pre-trained models for the paper "Improving CLIP Training with Language Rewrites" · ☆289 · Jan 14, 2024
- Official implementation of "SimMIM: A Simple Framework for Masked Image Modeling"