jovitalukasik / AG-Net
Code for "Learning Where To Look – Generative NAS is Surprisingly Efficient"
☆15 · Updated 3 years ago
Alternatives and similar repositories for AG-Net
Users interested in AG-Net are comparing it to the repositories listed below.
- Smooth Variational Graph Embeddings for Efficient Neural Architecture Search ☆16 · Updated last year
- GitHub repository of the ICLR 2023 paper "Neural Architecture Design and Robustness: A Dataset" ☆15 · Updated 2 years ago
- ☆47 · Updated 2 years ago
- Code for ViTAS_Vision Transformer Architecture Search ☆50 · Updated 4 years ago
- This is the official implementation of our BMVC 2022 paper "SP-ViT: Learning 2D Spatial Priors for Vision Transformers" ☆12 · Updated 2 years ago
- Code for our ICLR'2022 paper "Generalizing Few-Shot NAS with Gradient Matching" ☆22 · Updated 2 years ago
- [ICCV 23] An approach to enhance the efficiency of Vision Transformer (ViT) by concurrently employing token pruning and token merging tech… ☆101 · Updated 2 years ago
- Code for NASViT ☆66 · Updated 3 years ago
- Official PyTorch implementation of our ECCV 2022 paper "Sliced Recursive Transformer" ☆66 · Updated 3 years ago
- Official PyTorch implementation of A-ViT: Adaptive Tokens for Efficient Vision Transformer (CVPR 2022) ☆163 · Updated 3 years ago
- ☆27 · Updated 2 years ago
- Learning recognition/segmentation models without end-to-end training. 40%-60% less GPU memory footprint. Same training time. Better perfo… ☆90 · Updated 3 years ago
- ☆24 · Updated 3 years ago
- The implementation of our paper: Towards Robust Vision Transformer (CVPR 2022) ☆142 · Updated 3 years ago
- (ICCV 2021) BossNAS: Exploring Hybrid CNN-transformers with Block-wisely Self-supervised Neural Architecture Search ☆141 · Updated 3 years ago
- ☆57 · Updated 3 years ago
- Official repository for our paper "Robust Models are less Over-Confident" ☆20 · Updated 7 months ago
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch) ☆33 · Updated 2 years ago
- Python code for ICLR 2022 spotlight paper EViT: Expediting Vision Transformers via Token Reorganizations ☆193 · Updated 2 years ago
- Official PyTorch implementation of PS-KD ☆90 · Updated 3 years ago
- Adaptive Token Sampling for Efficient Vision Transformers (ECCV 2022 Oral Presentation) ☆104 · Updated last year
- A generic code base for neural network pruning, especially for pruning at initialization. ☆31 · Updated 3 years ago
- Official implementation of Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer ☆73 · Updated 3 years ago
- [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang… ☆89 · Updated last year
- [NeurIPS 2022] “M³ViT: Mixture-of-Experts Vision Transformer for Efficient Multi-task Learning with Model-Accelerator Co-design”, Hanxue … ☆131 · Updated 2 years ago
- [ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Li… ☆53 · Updated last year
- Code for paper "Self-Distillation from the Last Mini-Batch for Consistency Regularization" ☆43 · Updated 3 years ago
- [NeurIPS'22] What Makes a "Good" Data Augmentation in Knowledge Distillation -- A Statistical Perspective ☆37 · Updated 2 years ago
- (PyTorch) Training ResNets on ImageNet-100 data ☆64 · Updated 3 years ago
- PyTorch implementation of our paper accepted at ECCV 2022 -- Knowledge Condensation Distillation (https://arxiv.org/abs/2207.05409) ☆30 · Updated 2 years ago