salesforce / ALBEF
Code for ALBEF: a new vision-language pre-training method
☆1,622 · Updated 2 years ago
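ALBEF aligns its image and text encoders with an image-text contrastive (ITC) loss before cross-modal fusion. As a rough orientation only (the actual repository also uses momentum distillation and image-text matching objectives; the function and parameter names below are illustrative, not ALBEF's code), a minimal symmetric InfoNCE sketch in PyTorch:

```python
import torch
import torch.nn.functional as F

def itc_loss(image_feats: torch.Tensor, text_feats: torch.Tensor,
             temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE over a batch of paired image/text embeddings.

    image_feats, text_feats: (batch, dim) projections from the two encoders;
    matching pairs share the same row index.
    """
    # L2-normalize so the dot product is a cosine similarity.
    image_feats = F.normalize(image_feats, dim=-1)
    text_feats = F.normalize(text_feats, dim=-1)

    # (batch, batch) similarity matrix; diagonal entries are the positive pairs.
    logits = image_feats @ text_feats.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)

    # Average the image-to-text and text-to-image cross-entropy terms.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

# Smoke test with random features.
print(itc_loss(torch.randn(8, 256), torch.randn(8, 256)).item())
```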
Alternatives and similar repositories for ALBEF:
Users interested in ALBEF often compare it to the libraries listed below; a brief CLIP usage sketch follows the list for reference.
- Code for the ICML 2021 (long talk) paper: "ViLT: Vision-and-Language Transformer Without Convolution or Region Supervision" ☆1,443 · Updated 11 months ago
- Prompt Learning for Vision-Language Models (IJCV'22, CVPR'22) ☆1,909 · Updated 10 months ago
- TorchMultimodal is a PyTorch library for training state-of-the-art multimodal multi-task models at scale. ☆1,561 · Updated this week
- Grounded Language-Image Pre-training ☆2,356 · Updated last year
- X-VLM: Multi-Grained Vision Language Pre-Training (ICML 2022) ☆471 · Updated 2 years ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training). ☆1,182 · Updated 8 months ago
- Official repository of OFA (ICML 2022). Paper: "OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework" ☆2,480 · Updated 11 months ago
- awesome grounding: A curated list of research papers in visual grounding ☆1,065 · Updated last year
- ❄️🔥 Visual Prompt Tuning [ECCV 2022] https://arxiv.org/abs/2203.12119 ☆1,093 · Updated last year
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆649 · Updated 2 years ago
- Recent Advances in Vision and Language PreTrained Models (VL-PTMs) ☆1,152 · Updated 2 years ago
- PyTorch implementation of MoCo v3 https://arxiv.org/abs/2104.02057 ☆1,256 · Updated 3 years ago
- An official implementation of "CLIP4Clip: An Empirical Study of CLIP for End to End Video Clip Retrieval" ☆926 · Updated 11 months ago
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch ☆1,118 · Updated last year
- [ICCV 2021 Oral] Official PyTorch implementation of "Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers" ☆837 · Updated last year
- EVA Series: Visual Representation Fantasies from BAAI ☆2,453 · Updated 7 months ago
- A PyTorch Lightning solution to training OpenAI's CLIP from scratch. ☆683 · Updated 2 years ago
- Research code for the ECCV 2020 paper "UNITER: UNiversal Image-TExt Representation Learning" ☆792 · Updated 3 years ago
- [CVPR 2023] Official repository of the paper "MaPLe: Multi-modal Prompt Learning" ☆718 · Updated last year
- Implementation of 🦩 Flamingo, DeepMind's state-of-the-art few-shot visual question answering attention network, in PyTorch ☆1,235 · Updated 2 years ago
- METER: A Multimodal End-to-end TransformER Framework ☆367 · Updated 2 years ago
- ☆506 · Updated 2 years ago
- ☆995 · Updated 2 years ago
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) ☆715 · Updated 2 years ago
- Recent Advances in Vision and Language Pre-training (VLP) ☆293 · Updated last year
- CLIP-like model evaluation ☆677 · Updated last month
- A curated list of prompt-based papers in computer vision and vision-language learning. ☆913 · Updated last year
- Recent Transformer-based CV and related works. ☆1,332 · Updated last year
- Robust fine-tuning of zero-shot models ☆681 · Updated 2 years ago
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆411 · Updated 2 years ago
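Many of the repositories above build on or evaluate CLIP, so its inference pattern is useful shared context. A minimal zero-shot classification sketch, assuming OpenAI's `clip` package is installed (this mirrors that package's published usage, not any specific repository above; the blank image is a placeholder for real input):

```python
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

# Placeholder input: a blank 224x224 image stands in for real data.
image = preprocess(Image.new("RGB", (224, 224))).unsqueeze(0).to(device)
text = clip.tokenize(["a photo of a cat", "a photo of a dog"]).to(device)

with torch.no_grad():
    # logits_per_image: (1, num_captions) scaled cosine similarities.
    logits_per_image, logits_per_text = model(image, text)
    probs = logits_per_image.softmax(dim=-1)

print(probs.cpu().numpy())  # zero-shot probabilities over the candidate captions
```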