SHI-Labs / DiSparse-Multitask-Model-Compression
[CVPR 2022] DiSparse: Disentangled Sparsification for Multitask Model Compression
☆13 · Updated 2 years ago
Alternatives and similar repositories for DiSparse-Multitask-Model-Compression:
Users interested in DiSparse-Multitask-Model-Compression are comparing it to the repositories listed below.
- [NeurIPS'21] "Chasing Sparsity in Vision Transformers: An End-to-End Exploration" by Tianlong Chen, Yu Cheng, Zhe Gan, Lu Yuan, Lei Zhang… ☆90 · Updated last year
- Revisiting Parameter Sharing for Automatic Neural Channel Number Search, NeurIPS 2020 ☆20 · Updated 4 years ago
- [ICML 2022] ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks ☆15 · Updated 2 years ago
- (CVPR 2022) Automated Progressive Learning for Efficient Training of Vision Transformers ☆25 · Updated 2 years ago
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin… ☆40 · Updated last year
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch) ☆32 · Updated last year
- This is the PyTorch implementation for the paper: Generalizable Mixed-Precision Quantization via Attribution Rank Preservation, which is… ☆24 · Updated 3 years ago
- A generic code base for neural network pruning, especially for pruning at initialization. ☆30 · Updated 2 years ago
- [ICML 2021 Oral] "CATE: Computation-aware Neural Architecture Encoding with Transformers" by Shen Yan, Kaiqiang Song, Fei Liu, Mi Zhang ☆19 · Updated 3 years ago
- ☆16 · Updated 2 years ago
- [NeurIPS 2021] "Stronger NAS with Weaker Predictors", Junru Wu, Xiyang Dai, Dongdong Chen, Yinpeng Chen, Mengchen Liu, Ye Yu, Zhangyang W… ☆27 · Updated 2 years ago
- PyTorch implementation of "Deep Transferring Quantization" (ECCV 2020) ☆18 · Updated 2 years ago
- ☆22 · Updated 5 years ago
- Code for ViTAS: Vision Transformer Architecture Search ☆52 · Updated 3 years ago
- [ECCV 2022] SuperTickets: Drawing Task-Agnostic Lottery Tickets from Supernets via Jointly Architecture Searching and Parameter Pruning ☆19 · Updated 2 years ago
- [ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Li… ☆50 · Updated last year
- This repository implements the paper "Effective Training of Convolutional Neural Networks with Low-bitwidth Weights and Activations" ☆20 · Updated 3 years ago
- NAS Benchmark in "Prioritized Architecture Sampling with Monto-Carlo Tree Search", CVPR 2021 ☆38 · Updated 3 years ago
- Towards Compact CNNs via Collaborative Compression ☆11 · Updated 2 years ago
- Code for ICML 2021 submission ☆35 · Updated 3 years ago
- [NeurIPS'22] What Makes a "Good" Data Augmentation in Knowledge Distillation -- A Statistical Perspective ☆36 · Updated 2 years ago
- ☆56 · Updated 3 years ago
- [ICLR 2022] "As-ViT: Auto-scaling Vision Transformers without Training" by Wuyang Chen, Wei Huang, Xianzhi Du, Xiaodan Song, Zhangyang Wa… ☆76 · Updated 2 years ago
- [ICLR 2021] CompOFA: Compound Once-For-All Networks For Faster Multi-Platform Deployment ☆24 · Updated 2 years ago
- The codebase for the paper "PPT: Token Pruning and Pooling for Efficient Vision Transformer" ☆20 · Updated 2 months ago
- PyTorch implementation of our paper accepted by IEEE TNNLS, 2021 -- Network Pruning using Adaptive Exemplar Filters ☆22 · Updated 3 years ago
- Benchmarking Attention Mechanism in Vision Transformers. ☆17 · Updated 2 years ago
- This is the official PyTorch implementation for "Sharpness-aware Quantization for Deep Neural Networks". ☆40 · Updated 3 years ago
- [ICLR 2022] The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training by Shiwei Liu, Tianlo… ☆73 · Updated 2 years ago
- ☆17 · Updated 2 years ago