mlfoundations / wise-ft
Robust fine-tuning of zero-shot models
☆695 · Updated 2 years ago
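The repository implements WiSE-FT: rather than deploying the fine-tuned weights alone, it linearly interpolates between the zero-shot and fine-tuned weights of the same architecture, which improves robustness under distribution shift. Below is a minimal sketch of that interpolation, assuming two compatible PyTorch state dicts; the function name and signature are illustrative, not the repo's actual API.

```python
def wise_ft_interpolate(zeroshot_state, finetuned_state, alpha=0.5):
    """Weight-space ensembling: theta = (1 - alpha) * theta_zeroshot + alpha * theta_finetuned."""
    assert zeroshot_state.keys() == finetuned_state.keys()
    return {
        key: (1 - alpha) * zeroshot_state[key] + alpha * finetuned_state[key]
        for key in zeroshot_state
    }

# Usage (illustrative): load the mixed weights back into a model of the same architecture.
# merged = wise_ft_interpolate(zeroshot.state_dict(), finetuned.state_dict(), alpha=0.5)
# model.load_state_dict(merged)
```

Setting alpha = 1 recovers the fine-tuned model and alpha = 0 the zero-shot model; intermediate values often improve accuracy under distribution shift.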
Alternatives and similar repositories for wise-ft:
Users interested in wise-ft are comparing it to the repositories listed below:
- A PyTorch Lightning solution to training OpenAI's CLIP from scratch. ☆689 · Updated 3 years ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆651 · Updated 2 years ago
- CLIP-like model evaluation ☆693 · Updated 3 weeks ago
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆756 · Updated last year
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,191 · Updated 9 months ago
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention network from DeepMind, in PyTorch ☆1,237 · Updated 2 years ago
- DataComp: In search of the next generation of multimodal datasets ☆699 · Updated last year
- Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time (a sketch appears after this list) ☆456 · Updated 9 months ago
- Grounded Language-Image Pre-training ☆2,378 · Updated last year
- A concise but complete implementation of CLIP with various experimental improvements from recent papers ☆707 · Updated last year
- Official PyTorch implementation of GroupViT: Semantic Segmentation Emerges from Text Supervision, CVPR 2022. ☆757 · Updated 2 years ago
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch ☆1,129 · Updated last year
- Code release for "SLIP: Self-supervision Meets Language-Image Pre-training" ☆767 · Updated 2 years ago
- [ICCV 2021 Oral] Official PyTorch implementation for "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers" ☆847 · Updated last year
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆395 · Updated last year
- ❄️🔥 Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119 ☆1,103 · Updated last year
- [CVPR 2024] Alpha-CLIP: A CLIP Model Focusing on Wherever You Want ☆805 · Updated 8 months ago
- A method to increase the speed and lower the memory footprint of existing vision transformers. ☆1,042 · Updated 10 months ago
- CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks ☆412 · Updated last month
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) ☆1,934 · Updated 11 months ago
- [CVPR 2024 🔥] Grounding Large Multimodal Model (GLaMM), the first-of-its-kind model capable of generating natural language responses that are seamlessly integrated with object segmentation masks ☆864 · Updated 4 months ago
- EVA Series: Visual Representation Fantasies from BAAI ☆2,468 · Updated 8 months ago
- Hiera: A fast, powerful, and simple hierarchical vision transformer. ☆971 · Updated last year
- Official open-source code for "Scaling Language-Image Pre-training via Masking" ☆420 · Updated 2 years ago
- ICLR 2024 Spotlight: curation/training code, metadata, distribution, and pre-trained models for MetaCLIP; CVPR 2024: MoDE: CLIP Data Experts via Clustering ☆1,420 · Updated last month
- Official code for VisProg (CVPR 2023 Best Paper!) ☆718 · Updated 7 months ago
- [CVPR 2023] Official repository of the paper "MaPLe: Multi-modal Prompt Learning" ☆735 · Updated last year
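As noted at the model-soups entry above, the soup recipe simply averages the weights of several fine-tuned models. Below is a minimal sketch of a uniform soup, assuming all state dicts come from fine-tuning the same architecture; the helper name is illustrative, not the repo's API.

```python
def uniform_soup(state_dicts):
    """Element-wise average of several fine-tuned models' weights."""
    assert len(state_dicts) > 0
    soup = {key: value.clone().float() for key, value in state_dicts[0].items()}
    for sd in state_dicts[1:]:
        for key in soup:
            soup[key] += sd[key].float()
    return {key: value / len(state_dicts) for key, value in soup.items()}
```

Because the averaging happens once in weight space, inference cost is identical to that of a single model, which is the point of the paper's title.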