rentainhe / ViT.pytorch
The PyTorch reimplementation of Vision Transformer
☆10 · Updated 3 years ago
Alternatives and similar repositories for ViT.pytorch:
Users interested in ViT.pytorch are comparing it to the libraries listed below
- [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang… ☆89 · Updated last year
- ☆26 · Updated 2 years ago
- (CVPR 2022) Automated Progressive Learning for Efficient Training of Vision Transformers ☆25 · Updated last month
- Official implementation of Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer ☆71 · Updated 2 years ago
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin… ☆40 · Updated 2 years ago
- [NeurIPS'22] What Makes a "Good" Data Augmentation in Knowledge Distillation -- A Statistical Perspective ☆36 · Updated 2 years ago
- ☆24 · Updated 3 years ago
- [CVPR 2022] "The Principle of Diversity: Training Stronger Vision Transformers Calls for Reducing All Levels of Redundancy" by Tianlong C… ☆25 · Updated 3 years ago
- Lightweight Transformer for Multi-modal Tasks ☆15 · Updated 2 years ago
- A generic code base for neural network pruning, especially for pruning at initialization. ☆30 · Updated 2 years ago
- Implementation of PGONAS for CVPR22W and RD-NAS for ICASSP23 ☆22 · Updated last year
- ☆9 · Updated 3 years ago
- Official PyTorch implementation for the CVPR 2022 paper "Bootstrapping ViTs: Towards Liberating Vision Transformers from Pre-training" ☆17 · Updated 2 years ago
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch) ☆32 · Updated last year
- The official project website of "NORM: Knowledge Distillation via N-to-One Representation Matching" (The paper of NORM is published in IC… ☆20 · Updated last year
- The official MegEngine implementation of the ECCV 2022 paper "Efficient One Pass Self-distillation with Zipf's Label Smoothin…" ☆26 · Updated 2 years ago
- [NeurIPS 2022] "Back Razor: Memory-Efficient Transfer Learning by Self-Sparsified Backpropagation", Ziyu Jiang*, Xuxi Chen*, Xueqin Huan… ☆19 · Updated 2 years ago
- Repository containing code for blockwise SSL training ☆29 · Updated 5 months ago
- [ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Li… ☆52 · Updated last year
- [ICLR 2022] Fast AdvProp ☆35 · Updated 3 years ago
- [AAAI 2022] The official PyTorch implementation of "Less is More: Pay Less Attention in Vision Transformers" ☆96 · Updated 2 years ago
- TF-FD ☆20 · Updated 2 years ago
- 🔥🔥 [WACV 2024] Mini but Mighty: Finetuning ViTs with Mini Adapters ☆19 · Updated 8 months ago
- BESA, a differentiable weight-pruning technique for large language models ☆16 · Updated last year
- ☆22 · Updated 3 years ago
- Code for the paper "On the Efficacy of Small Self-Supervised Contrastive Models without Distillation Signals" ☆16 · Updated 3 years ago
- [CVPR 2021] Code for "Landmark Regularization: Ranking Guided Super-Net Training in Neural Architecture Search" ☆9 · Updated 3 years ago
- Official PyTorch implementation of Super Vision Transformer (IJCV) ☆43 · Updated last year
- ☆13 · Updated 9 months ago
- ☆57 · Updated 3 years ago