mlfoundations / wise-ft
Robust fine-tuning of zero-shot models
☆725 · Updated 3 years ago
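At its core, wise-ft performs weight-space ensembling: it linearly interpolates a zero-shot checkpoint with a fine-tuned checkpoint of the same architecture. The sketch below illustrates that interpolation in plain PyTorch; the function and file names are illustrative assumptions, not the repository's actual API.

```python
# Minimal sketch of weight-space ensembling (the WiSE-FT idea): blend a zero-shot
# and a fine-tuned checkpoint of the same model. Names here are illustrative only.
import torch

def interpolate_state_dicts(zero_shot, fine_tuned, alpha=0.5):
    """Return theta = (1 - alpha) * theta_zero_shot + alpha * theta_fine_tuned."""
    assert zero_shot.keys() == fine_tuned.keys(), "checkpoints must share parameter names"
    return {k: (1 - alpha) * zero_shot[k] + alpha * fine_tuned[k] for k in zero_shot}

# Hypothetical usage: load two checkpoints of the same architecture and sweep alpha
# to trade off in-distribution accuracy against robustness under distribution shift.
# zero_shot = torch.load("zeroshot.pt")    # state_dict of the zero-shot model
# fine_tuned = torch.load("finetuned.pt")  # state_dict after fine-tuning
# model.load_state_dict(interpolate_state_dicts(zero_shot, fine_tuned, alpha=0.5))
```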
Alternatives and similar repositories for wise-ft
Users interested in wise-ft are comparing it to the repositories listed below
- CLIP-like model evaluation ☆748 · Updated 2 weeks ago
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in Pytorch ☆1,164 · Updated last year
- A PyTorch Lightning solution to training OpenAI's CLIP from scratch. ☆706 · Updated 3 years ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆664 · Updated 2 years ago
- [ICCV 2021 Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers ☆862 · Updated last year
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆779 · Updated last year
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of Deepmind, in Pytorch ☆1,256 · Updated 2 years ago
- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time (see the weight-averaging sketch after this list) ☆477 · Updated last year
- A method to increase the speed and lower the memory footprint of existing vision transformers. ☆1,082 · Updated last year
- Code release for SLIP: Self-supervision meets Language-Image Pre-training ☆774 · Updated 2 years ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,206 · Updated last year
- Grounded Language-Image Pre-training ☆2,474 · Updated last year
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) ☆739 · Updated 3 years ago
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆402 · Updated last year
- DataComp: In search of the next generation of multimodal datasets ☆729 · Updated 3 months ago
- GIT: A Generative Image-to-text Transformer for Vision and Language ☆572 · Updated last year
- Pix2Seq codebase: multi-tasks with generative modeling (autoregressive and diffusion) ☆920 · Updated last year
- ICLR 2024 Spotlight: curation/training code, metadata, distribution and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Experts via Clustering ☆1,624 · Updated this week
- A collection of papers on the topic of "Computer Vision in the Wild (CVinW)" ☆1,322 · Updated last year
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,638 · Updated this week
- Official Pytorch Implementation of: "ImageNet-21K Pretraining for the Masses" (NeurIPS 2021) paper ☆768 · Updated 2 years ago
- Official Open Source code for "Scaling Language-Image Pre-training via Masking" ☆426 · Updated 2 years ago
- EVA Series: Visual Representation Fantasies from BAAI ☆2,547 · Updated last year
- ❄️🔥 Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119 ☆1,143 · Updated last year
- Official PyTorch implementation of "ML-Decoder: Scalable and Versatile Classification Head" (2021) ☆343 · Updated 2 years ago
- A concise but complete implementation of CLIP with various experimental improvements from recent papers ☆711 · Updated last year
- Official PyTorch implementation of GroupViT: Semantic Segmentation Emerges from Text Supervision, CVPR 2022. ☆768 · Updated 3 years ago
- OpenAI CLIP text encoders for multiple languages! ☆809 · Updated 2 years ago
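For comparison with the Model soups entry above: where wise-ft interpolates two checkpoints, a uniform model soup averages the weights of many fine-tuned checkpoints of the same architecture. The sketch below is a minimal illustration of that averaging; the helper name and file names are assumptions, not the soups repository's code.

```python
# Minimal sketch of a uniform "model soup": average the weights of several
# fine-tuned checkpoints of the same architecture. Illustrative, not the repo's API.
import torch

def uniform_soup(state_dicts):
    """Average matching tensors across state_dicts that share identical keys."""
    keys = state_dicts[0].keys()
    return {
        k: torch.stack([sd[k].float() for sd in state_dicts]).mean(dim=0)
        for k in keys
    }

# Hypothetical usage: blend several fine-tuning runs into one set of weights.
# soups = [torch.load(p) for p in ["run1.pt", "run2.pt", "run3.pt"]]
# model.load_state_dict(uniform_soup(soups))
```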