[ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Liu, Zhangyang Wang
☆54 · Updated Dec 1, 2023
Alternatives and similar repositories for UVC
Users interested in UVC are comparing it to the repositories listed below.
- ☆13 · Updated Sep 24, 2023
- [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang… · ☆88 · Updated Dec 1, 2023
- ☆48 · Updated Aug 7, 2023
- [ICML 2021] "Auto-NBA: Efficient and Effective Search Over the Joint Space of Networks, Bitwidths, and Accelerators" by Yonggan Fu, Yonga… · ☆16 · Updated Jan 3, 2022
- Official code for our CVPR'22 paper "Vision Transformer Slimming: Multi-Dimension Searching in Continuous Optimization Space" · ☆251 · Updated Aug 24, 2025
- [ICCV'23] An approach to enhance the efficiency of Vision Transformer (ViT) by concurrently employing token pruning and token merging tech… · ☆103 · Updated Jul 14, 2023
- Structured Neuron Level Pruning to compress Transformer-based models [ECCV'24] · ☆17 · Updated Aug 7, 2024
- Official implementation of Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer · ☆74 · Updated Jul 13, 2022
- ☆28 · Updated Nov 29, 2022
- ☆20 · Updated Apr 24, 2022
- Python code for the ICLR 2022 spotlight paper "EViT: Expediting Vision Transformers via Token Reorganizations" · ☆198 · Updated Sep 3, 2023
- Code corresponding to the paper "On the Robustness of Vision Transformers": https://arxiv.org/abs/2104.02610 · ☆25 · Updated Dec 16, 2025
- ☆13 · Updated Jun 16, 2024
- [ICML 2023] UPop: Unified and Progressive Pruning for Compressing Vision-Language Transformers · ☆105 · Updated Dec 30, 2024
- CoCoFL: Communication- and Computation-Aware Federated Learning via Partial NN Freezing and Quantization · ☆13 · Updated Aug 3, 2024
- [ICASSP'20] DNN-Chip Predictor: An Analytical Performance Predictor for DNN Accelerators with Various Dataflows and Hardware Architecture… · ☆25 · Updated Oct 1, 2022
- Qimera: Data-free Quantization with Synthetic Boundary Supporting Samples [NeurIPS 2021] · ☆34 · Updated Dec 12, 2021
- ☆29 · Updated Dec 5, 2023
- [TPAMI 2024] This is the official repository for our paper "Pruning Self-attentions into Convolutional Layers in Single Path". · ☆116 · Updated Dec 30, 2023
- [ICML 2022] "Coarsening the Granularity: Towards Structurally Sparse Lottery Tickets" by Tianlong Chen, Xuxi Chen, Xiaolong Ma, Yanzhi Wa… · ☆33 · Updated Apr 9, 2023
- Recent Advances on Efficient Vision Transformers · ☆55 · Updated Jan 11, 2023
- [CVPR'23 Highlight] Heterogeneous Continual Learning · ☆15 · Updated Dec 5, 2023
- Ensemble Knowledge Guided Sub-network Search and Fine-tuning for Filter Pruning · ☆19 · Updated Sep 20, 2022
- [CVPR'23] SparseViT: Revisiting Activation Sparsity for Efficient High-Resolution Vision Transformer · ☆82 · Updated Apr 24, 2024
- Code for the ICML 2022 paper "SPDY: Accurate Pruning with Speedup Guarantees" · ☆20 · Updated May 3, 2023
- [AAAI 2024] Fluctuation-based Adaptive Structured Pruning for Large Language Models · ☆75 · Updated Jan 6, 2024
- Official PyTorch code for "APP: Anytime Progressive Pruning" (DyNN @ ICML 2022; CLL @ ACML 2022; SNN @ ICML 2022; SlowDNN 2023) · ☆16 · Updated Nov 22, 2022
- [ICLR 2022] Code for the paper "Exploring Extreme Parameter Compression for Pre-trained Language Models" (https://arxiv.org/abs/2205.10036) · ☆22 · Updated May 24, 2023
- Code for the NeurIPS 2022 paper "Optimal Brain Compression: A Framework for Accurate Post-Training Quantization and Pruning" · ☆131 · Updated Jul 11, 2023
- Source code for the paper "Robust Quantization: One Model to Rule Them All" · ☆40 · Updated Mar 24, 2023
- Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" · ☆880 · Updated Aug 20, 2024
- ☆12 · Updated Sep 26, 2019
- This is a collection of our NAS and Vision Transformer work · ☆1,834 · Updated Jul 25, 2024
- Official implementation of the paper "Vision Transformer with Progressive Sampling", ICCV 2021 · ☆153 · Updated Jan 14, 2022
- EfficientVLM: Fast and Accurate Vision-Language Models via Knowledge Distillation and Modal-adaptive Pruning (ACL 2023) · ☆34 · Updated Jul 18, 2023
- Self-Similarity Priors: Neural Collages as Differentiable Fractal Representations · ☆30 · Updated Nov 26, 2022
- Repo for the paper "Extrapolating from a Single Image to a Thousand Classes using Distillation" · ☆36 · Updated Jul 16, 2024
- Code for Learned Thresholds Token Merging and Pruning for Vision Transformers (LTMP). A technique to reduce the size of Vision Transforme… · ☆17 · Updated Nov 24, 2024
- Code for the ICML 2021 paper "Sharing Less is More: Lifelong Learning in Deep Networks with Selective Layer Transfer" · ☆12 · Updated Aug 17, 2021