salesforce / MUST
PyTorch code for MUST
☆106 · Updated 2 years ago
Alternatives and similar repositories for MUST:
Users who are interested in MUST are comparing it to the libraries listed below.
- A PyTorch implementation of Mugs, proposed in our paper "Mugs: A Multi-Granular Self-Supervised Learning Framework". ☆83 · Updated last year
- Filtering, Distillation, and Hard Negatives for Vision-Language Pre-Training ☆135 · Updated 2 years ago
- This is the official PyTorch/GPU implementation of SupMAE. ☆77 · Updated 2 years ago
- ☆50 · Updated 2 years ago
- PyTorch implementation of the paper "MILAN: Masked Image Pretraining on Language Assisted Representation" (https://arxiv.org/pdf/2208.0604…) ☆82 · Updated 2 years ago
- Code for the paper "CiT: Curation in Training for Effective Vision-Language Data". ☆78 · Updated 2 years ago
- This repo is the official implementation of UPL (Unsupervised Prompt Learning for Vision-Language Models). ☆112 · Updated 2 years ago
- Code and models for "GeneCIS: A Benchmark for General Conditional Image Similarity" ☆56 · Updated last year
- ☆59 · Updated 3 years ago
- PyTorch implementation of Asymmetric Siamese (https://arxiv.org/abs/2204.00613) ☆100 · Updated 2 years ago
- Code release for the research paper "Exploring Long-Sequence Masked Autoencoders" ☆99 · Updated 2 years ago
- 📍 Official PyTorch implementation of the paper "ProtoCLIP: Prototypical Contrastive Language Image Pretraining" (IEEE TNNLS) ☆52 · Updated last year
- [NeurIPS 2021] ORL: Unsupervised Object-Level Representation Learning from Scene Images ☆58 · Updated 3 years ago
- Official code for the ECCV 2022 paper MS-CLIP ☆88 · Updated 2 years ago
- Reproducible scaling laws for contrastive language-image learning (https://arxiv.org/abs/2212.07143) ☆157 · Updated last year
- Un-Mix: Rethinking Image Mixtures for Unsupervised Visual Representation Learning ☆151 · Updated 2 years ago
- TokenMix: Rethinking Image Mixing for Data Augmentation in Vision Transformers (ECCV 2022) ☆93 · Updated 2 years ago
- [ECCV 2022] A new benchmark for evaluating pre-trained models and a new supervised contrastive learning framework. ☆107 · Updated last year
- CLIP Itself is a Strong Fine-tuner: Achieving 85.7% and 88.0% Top-1 Accuracy with ViT-B and ViT-L on ImageNet ☆212 · Updated 2 years ago
- ☆64 · Updated last year
- Compress conventional Vision-Language Pre-training data ☆49 · Updated last year
- [ICLR 2024] Exploring Target Representations for Masked Autoencoders ☆53 · Updated last year
- Official code for ConMIM (ICLR 2023) ☆58 · Updated 2 years ago
- ☆117 · Updated 2 years ago
- [NeurIPS 2023] Official implementation and model release of the paper "What Makes Good Examples for Visual In-Context Learning?" ☆173 · Updated last year
- [ECCV 2022] Bootstrapped Masked Autoencoders for Vision BERT Pretraining ☆97 · Updated 2 years ago
- Release of ImageNet-Captions ☆45 · Updated 2 years ago
- PyTorch code for "VL-Adapter: Parameter-Efficient Transfer Learning for Vision-and-Language Tasks" (CVPR 2022) ☆204 · Updated 2 years ago
- ☆184 · Updated last year
- Toolkit for the ELEVATER benchmark ☆70 · Updated last year