microsoft / random_quantize
A novel data augmentation method across data modalities
☆72 · Updated last year
Alternatives and similar repositories for random_quantize
Users that are interested in random_quantize are comparing it to the libraries listed below
- MixMIM: Mixed and Masked Image Modeling for Efficient Visual Representation Learning ☆143 · Updated 2 years ago
- ☆57 · Updated 3 years ago
- Adaptive Token Sampling for Efficient Vision Transformers (ECCV 2022 Oral Presentation) ☆102 · Updated last year
- [CVPR 2023] This repository includes the official implementation of our paper "Masked Autoencoders Enable Efficient Knowledge Distillers" ☆107 · Updated 2 years ago
- This is a PyTorch implementation of "Context AutoEncoder for Self-Supervised Representation Learning" ☆198 · Updated 2 years ago
- (AAAI 2023 Oral) PyTorch implementation of "CF-ViT: A General Coarse-to-Fine Method for Vision Transformer" ☆105 · Updated 2 years ago
- Code for "You Only Cut Once: Boosting Data Augmentation with a Single Cut", ICML 2022 ☆105 · Updated 2 years ago
- ☆87 · Updated last year
- LoMaR (Efficient Self-supervised Vision Pretraining with Local Masked Reconstruction) ☆65 · Updated 4 months ago
- [CVPR 2023 Highlight] This is the official implementation of "Stitchable Neural Networks" ☆247 · Updated 2 years ago
- Official code for "Uniform Masking: Enabling MAE Pre-training for Pyramid-based Vision Transformers with Locality" ☆243 · Updated 2 years ago
- [NeurIPS 2022] Official implementation of the paper "Green Hierarchical Vision Transformer for Masked Image Modeling" ☆173 · Updated 2 years ago
- [AAAI 2022] This is the official PyTorch implementation of "Less is More: Pay Less Attention in Vision Transformers" ☆97 · Updated 3 years ago
- [ECCV 2022] Implementation of the paper "Locality Guidance for Improving Vision Transformers on Tiny Datasets" ☆80 · Updated 3 years ago
- [NeurIPS'23] DropPos: Pre-Training Vision Transformers by Reconstructing Dropped Positions ☆61 · Updated last year
- PyTorch implementation of R-MAE (https://arxiv.org/abs/2306.05411) ☆113 · Updated 2 years ago
- Official code for ConMIM (ICLR 2023) ☆60 · Updated 2 years ago
- [CVPR'23 & TPAMI'25] Hard Patches Mining for Masked Image Modeling & Bootstrap Masked Visual Modeling via Hard Patch Mining ☆101 · Updated 4 months ago
- Official implementation of the paper "LightViT: Towards Light-Weight Convolution-Free Vision Transformers" ☆140 · Updated 3 years ago
- TokenMix: Rethinking Image Mixing for Data Augmentation in Vision Transformers (ECCV 2022) ☆94 · Updated 2 years ago
- (CVPR 2022) Official PyTorch implementation of KDEP, "Knowledge Distillation as Efficient Pre-training: Faster Convergence, Higher Data-eff…" ☆61 · Updated 3 years ago
- A Close Look at Spatial Modeling: From Attention to Convolution ☆91 · Updated 2 years ago
- Python code for the ICLR 2022 spotlight paper "EViT: Expediting Vision Transformers via Token Reorganizations" ☆190 · Updated last year
- [ICLR 2024] Exploring Target Representations for Masked Autoencoders ☆56 · Updated last year
- PyTorch implementation of our paper accepted by ECCV 2022, "Knowledge Condensation Distillation" (https://arxiv.org/abs/2207.05409) ☆30 · Updated 2 years ago
- ☆62 · Updated 2 years ago
- Training ImageNet / CIFAR models with SOTA strategies and fancy techniques such as ViT, KD, Rep, etc. ☆83 · Updated last year
- FastMIM, official PyTorch implementation of the paper "FastMIM: Expediting Masked Image Modeling Pre-training for Vision" (https://arxiv.o… ☆39 · Updated 2 years ago
- ☆261 · Updated 2 years ago
- Official implementation of the paper "Knowledge Distillation from A Stronger Teacher", NeurIPS 2022 ☆147 · Updated 2 years ago