facebookresearch / SLIP
Code release for SLIP: Self-supervision meets Language-Image Pre-training
☆767 · Updated 2 years ago
Alternatives and similar repositories for SLIP
Users interested in SLIP are comparing it to the repositories listed below.
- A concise but complete implementation of CLIP with various experimental improvements from recent papers ☆711 · Updated last year
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆655 · Updated 2 years ago
- A PyTorch Lightning solution for training OpenAI's CLIP from scratch ☆694 · Updated 3 years ago
- ☆1,008 · Updated 2 years ago
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) ☆727 · Updated 3 years ago
- Implementation of 🦩 Flamingo, state-of-the-art few-shot visual question answering attention net out of DeepMind, in PyTorch ☆1,241 · Updated 2 years ago
- EsViT: Efficient self-supervised Vision Transformers ☆412 · Updated last year
- [CVPR 2022] Official code for "Unified Contrastive Learning in Image-Text-Label Space" ☆399 · Updated last year
- Official PyTorch implementation of GroupViT: Semantic Segmentation Emerges from Text Supervision, CVPR 2022 ☆761 · Updated 3 years ago
- Implementation of CoCa, Contrastive Captioners are Image-Text Foundation Models, in PyTorch ☆1,141 · Updated last year
- Code to reproduce the results in the FAIR research paper "Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting V…" ☆488 · Updated 2 years ago
- [ICCV 2021, Oral] Official PyTorch implementation for Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decode… ☆851 · Updated last year
- Robust fine-tuning of zero-shot models ☆705 · Updated 3 years ago
- CLIP-like model evaluation ☆717 · Updated last week
- Pix2Seq codebase: multi-tasks with generative modeling (autoregressive and diffusion) ☆912 · Updated last year
- Conceptual 12M is a dataset containing (image-URL, caption) pairs collected for vision-and-language pre-training ☆391 · Updated 2 years ago
- Omnivore: A Single Model for Many Visual Modalities ☆564 · Updated 2 years ago
- [ICLR 2022] Code for "How Much Can CLIP Benefit Vision-and-Language Tasks?" https://arxiv.org/abs/2107.06383 ☆411 · Updated 2 years ago
- Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141) ☆456 · Updated 3 years ago
- [CVPR 2022] Official code for "RegionCLIP: Region-based Language-Image Pretraining" ☆765 · Updated last year
- [CVPR 2022] DenseCLIP: Language-Guided Dense Prediction with Context-Aware Prompting ☆535 · Updated last year
- PyTorch implementation of MoCo v3 (https://arxiv.org/abs/2104.02057) ☆1,269 · Updated 3 years ago
- Language-Driven Semantic Segmentation ☆790 · Updated 5 months ago
- Official implementation of "SimMIM: A Simple Framework for Masked Image Modeling" ☆975 · Updated 2 years ago
- [CVPR 2021] VirTex: Learning Visual Representations from Textual Annotations ☆563 · Updated last year
- Neighborhood Attention Transformer (arXiv 2022 / CVPR 2023) and Dilated Neighborhood Attention Transformer (arXiv 2022) ☆1,117 · Updated last year
- MultiMAE: Multi-modal Multi-task Masked Autoencoders, ECCV 2022 ☆578 · Updated 2 years ago
- Awesome list for research on CLIP (Contrastive Language-Image Pre-Training) ☆1,199 · Updated 11 months ago
- Implementation of popular SOTA self-supervised learning algorithms as fastai callbacks ☆320 · Updated 2 years ago
- A method to increase the speed and lower the memory footprint of existing vision transformers ☆1,054 · Updated 11 months ago
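Most of the repositories above build on the same CLIP-style objective that SLIP extends: a symmetric contrastive (InfoNCE) loss over paired image and text embeddings. For orientation, here is a minimal PyTorch sketch of that loss. The tensor shapes, the fixed temperature, and the random stand-in embeddings are illustrative assumptions, not code taken from SLIP or any repository listed here (most implementations also learn the temperature and gather embeddings across GPUs).

```python
# Minimal sketch of the CLIP-style symmetric contrastive loss.
# Hypothetical, self-contained example; not from any listed repository.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_features: torch.Tensor,
                          text_features: torch.Tensor,
                          temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss over a batch of paired image/text embeddings."""
    # L2-normalize so the dot products below are cosine similarities.
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)

    # Pairwise similarity matrix, scaled by the (here fixed) temperature.
    logits = image_features @ text_features.t() / temperature

    # The matching caption for image i sits at column i.
    targets = torch.arange(logits.size(0), device=logits.device)

    # Average the image-to-text and text-to-image cross-entropy terms.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

if __name__ == "__main__":
    # Random stand-ins for encoder outputs: a batch of 8 paired
    # 512-dimensional image and text embeddings.
    img = torch.randn(8, 512)
    txt = torch.randn(8, 512)
    print(clip_contrastive_loss(img, txt))
```

SLIP itself adds a SimCLR-style self-supervised image-to-image term on top of this language-image loss, which is the main axis along which the repositories above differ: purely contrastive (CLIP variants), purely self-supervised (iBOT, SimMIM, MSN), or hybrids of the two.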