rentainhe / ViT.pytorch
A PyTorch reimplementation of the Vision Transformer
☆10 · Updated 3 years ago
Related projects
Alternatives and complementary repositories for ViT.pytorch
- TF-FD ☆20 · Updated last year
- (CVPR 2022) Automated Progressive Learning for Efficient Training of Vision Transformers ☆25 · Updated 2 years ago
- Official implementation of Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer ☆69 · Updated 2 years ago
- ☆24 · Updated last year
- [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang… ☆90 · Updated 11 months ago
- Official code for the CVPR 2022 paper "Relieving Long-tailed Instance Segmentation via Pairwise Class Balance" ☆37 · Updated 2 years ago
- [CVPR 2022] Official implementation of the paper "Learning Where to Learn in Cross-View Self-Supervised Learning" ☆26 · Updated 2 years ago
- Official implementation of the paper "Function-Consistent Feature Distillation" (ICLR 2023) ☆26 · Updated last year
- [ICLR 2022] Fast AdvProp ☆35 · Updated 2 years ago
- The official project website of "NORM: Knowledge Distillation via N-to-One Representation Matching" (The paper of NORM is published in IC… ☆19 · Updated last year
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin… ☆40 · Updated last year
- Lightweight Transformer for Multi-modal Tasks ☆15 · Updated last year
- ☆22 · Updated 5 years ago
- ☆24 · Updated 2 years ago
- ☆23 · Updated 11 months ago
- Bag of Instances Aggregation Boosts Self-supervised Distillation (ICLR 2022) ☆33 · Updated 2 years ago
- ☆21 · Updated 3 years ago
- Implementation of PGONAS (CVPR22W) and RD-NAS (ICASSP23) ☆23 · Updated last year
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch) ☆31 · Updated last year
- ☆11 · Updated 2 years ago
- [CVPR 2022] Meta-attention for ViT-backed Continual Learning ☆34 · Updated 2 years ago
- PyTorch implementation of our paper "Distilling a Powerful Student Model via Online Knowledge Distillation", accepted by IEEE TNNLS, 2022 ☆28 · Updated 3 years ago
- Beyond Masking: Demystifying Token-Based Pre-Training for Vision Transformers ☆26 · Updated 2 years ago
- Repo for the paper "Extrapolating from a Single Image to a Thousand Classes using Distillation" ☆37 · Updated 3 months ago
- Code for the paper "On the Efficacy of Small Self-Supervised Contrastive Models without Distillation Signals" ☆16 · Updated 2 years ago
- ISD: Self-Supervised Learning by Iterative Similarity Distillation ☆36 · Updated 3 years ago
- The official MegEngine implementation of the ECCV 2022 paper "Efficient One Pass Self-distillation with Zipf's Label Smoothin…" ☆25 · Updated 2 years ago
- Benchmarking Attention Mechanism in Vision Transformers ☆16 · Updated 2 years ago
- [NeurIPS'22] What Makes a "Good" Data Augmentation in Knowledge Distillation -- A Statistical Perspective ☆36 · Updated last year
- ☆16 · Updated 2 years ago