Masked Siamese Networks for Label-Efficient Learning (https://arxiv.org/abs/2204.07141)
☆464 · May 9, 2022 · Updated 3 years ago
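Since the listing gives only the paper link, a minimal sketch of the masked-siamese objective may help orient the comparisons below. This is an illustration under assumptions, not the repository's actual code: `anchor_emb` is assumed to be the encoder output for a randomly masked view, `target_emb` for the full view, and `prototypes` a learnable bank; the paper's mean-entropy-maximization regularizer is omitted for brevity.

```python
# Illustrative sketch of the MSN objective (arXiv:2204.07141); all names
# here are hypothetical, not taken from the facebookresearch/msn codebase.
import torch
import torch.nn.functional as F

def msn_loss(anchor_emb, target_emb, prototypes,
             tau_anchor: float = 0.1, tau_target: float = 0.025):
    """Cross-entropy between prototype assignments of the masked anchor view
    and the sharpened, stop-gradient assignments of the unmasked target."""
    # Cosine similarities of embeddings to the prototype bank.
    anchor_logits = F.normalize(anchor_emb, dim=-1) @ F.normalize(prototypes, dim=-1).T
    anchor_probs = F.softmax(anchor_logits / tau_anchor, dim=-1)
    with torch.no_grad():  # target branch receives no gradient
        target_logits = F.normalize(target_emb, dim=-1) @ F.normalize(prototypes, dim=-1).T
        # A lower temperature sharpens the target assignments.
        target_probs = F.softmax(target_logits / tau_target, dim=-1)
    # (MSN additionally adds a mean-entropy-maximization regularizer,
    # omitted here for brevity.)
    return -(target_probs * torch.log(anchor_probs + 1e-6)).sum(dim=-1).mean()

# Toy shapes: batch of 8 embeddings of dim 256, bank of 1024 prototypes.
emb_a, emb_t = torch.randn(8, 256), torch.randn(8, 256)
protos = torch.randn(1024, 256, requires_grad=True)
print(msn_loss(emb_a, emb_t, protos))
```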
Alternatives and similar repositories for msn
Users interested in msn are comparing it to the libraries listed below.
- PyTorch implementation of Asymmetric Siamese (https://arxiv.org/abs/2204.00613) ☆99 · May 2, 2022 · Updated 3 years ago
- iBOT: Image BERT Pre-Training with Online Tokenizer (ICLR 2022) ☆767 · Apr 14, 2022 · Updated 3 years ago
- VISSL is FAIR's library of extensible, modular and scalable components for SOTA Self-Supervised Learning with images. ☆3,294 · Mar 3, 2024 · Updated 2 years ago
- ConvMAE: Masked Convolution Meets Masked Autoencoders ☆523 · Mar 14, 2023 · Updated 3 years ago
- EsViT: Efficient self-supervised Vision Transformers ☆412 · Aug 28, 2023 · Updated 2 years ago
- Official implementation of "SimMIM: A Simple Framework for Masked Image Modeling" ☆1,029 · Sep 29, 2022 · Updated 3 years ago
- Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities ☆80 · Jan 7, 2026 · Updated 2 months ago
- Code to reproduce the results in the FAIR research papers "Semi-Supervised Learning of Visual Features by Non-Parametrically Predicting V…" ☆492 · Apr 28, 2023 · Updated 2 years ago
- MultiMAE: Multi-modal Multi-task Masked Autoencoders, ECCV 2022 ☆617 · Dec 13, 2022 · Updated 3 years ago
- [ICCV 2023] You Only Look at One Partial Sequence ☆343 · Oct 21, 2023 · Updated 2 years ago
- [ECCV 2022] Dense Siamese Network for Dense Unsupervised Learning ☆29 · Jul 21, 2022 · Updated 3 years ago
- PyTorch implementation of MoCo v3 (https://arxiv.org/abs/2104.02057) ☆1,321 · Nov 25, 2021 · Updated 4 years ago
- solo-learn: a library of self-supervised methods for visual representation learning powered by PyTorch Lightning ☆1,553 · Mar 16, 2026 · Updated last week
- Code release for "SLIP: Self-supervision meets Language-Image Pre-training" ☆787 · Feb 9, 2023 · Updated 3 years ago
- A PyTorch implementation of Mugs, proposed in the paper "Mugs: A Multi-Granular Self-Supervised Learning Framework" ☆84 · Feb 13, 2024 · Updated 2 years ago
- PyTorch code for training Vision Transformers with the self-supervised learning method DINO ☆7,485 · Jul 3, 2024 · Updated last year
- VICRegL official code base ☆233 · Feb 22, 2023 · Updated 3 years ago
- PyTorch implementation of SwAV (https://arxiv.org/abs/2006.09882) ☆2,090 · Apr 13, 2023 · Updated 2 years ago
- Official code for Cross-Covariance Image Transformer (XCiT) ☆674 · Sep 28, 2021 · Updated 4 years ago
- PyTorch implementation of MAE (https://arxiv.org/abs/2111.06377) ☆8,243 · Jul 23, 2024 · Updated last year
- Code release for the research paper "Exploring Long-Sequence Masked Autoencoders" ☆100 · Oct 14, 2022 · Updated 3 years ago
- Official repository for "Revisiting Weakly Supervised Pre-Training of Visual Perception Models" (https://arxiv.org/abs/2201.08371) ☆182 · Apr 17, 2022 · Updated 3 years ago
- PyTorch implementation of "Context AutoEncoder for Self-Supervised Representation Learning" ☆197 · Jan 11, 2023 · Updated 3 years ago
- Supervision Exists Everywhere: A Data Efficient Contrastive Language-Image Pre-training Paradigm ☆677 · Sep 19, 2022 · Updated 3 years ago
- Official DeiT repository ☆4,327 · Mar 15, 2024 · Updated 2 years ago
- [NeurIPS 2022] Official implementation of the paper "Green Hierarchical Vision Transformer for Masked Image Modeling" ☆177 · Jan 16, 2023 · Updated 3 years ago
- Omnivore: A Single Model for Many Visual Modalities ☆572 · Nov 12, 2022 · Updated 3 years ago
- [ECCV 2022] New benchmark for evaluating pre-trained models; new supervised contrastive learning framework ☆110 · Dec 8, 2023 · Updated 2 years ago
- [ECCV 2022] Bootstrapped Masked Autoencoders for Vision BERT Pretraining ☆97 · Nov 2, 2022 · Updated 3 years ago
- A method to increase the speed and lower the memory footprint of existing vision transformers. ☆1,174 · Jun 17, 2024 · Updated last year
- Official PyTorch/GPU implementation of SupMAE ☆79 · Aug 30, 2022 · Updated 3 years ago
- OpenMMLab Self-Supervised Learning Toolbox and Benchmark ☆3,297 · Jun 25, 2023 · Updated 2 years ago
- ☆16 · Jul 7, 2023 · Updated 2 years ago
- [NeurIPS 2022 Spotlight] VideoMAE: Masked Autoencoders are Data-Efficient Learners for Self-Supervised Video Pre-Training