eksuas / eenets.pytorch
PyTorch implementation of EENets
☆19 · Updated 9 months ago
Alternatives and similar repositories for eenets.pytorch
Users interested in eenets.pytorch are comparing it to the repositories listed below.
- [ICLR-2020] Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With Trainable Masked Layers. ☆31 · Updated 5 years ago
- ☆48 · Updated 5 years ago
- ☆25 · Updated 3 years ago
- This is the PyTorch implementation for the paper: Generalizable Mixed-Precision Quantization via Attribution Rank Preservation, which is… ☆25 · Updated 3 years ago
- [NeurIPS 2021] "Stronger NAS with Weaker Predictors", Junru Wu, Xiyang Dai, Dongdong Chen, Yinpeng Chen, Mengchen Liu, Ye Yu, Zhangyang W… ☆27 · Updated 2 years ago
- Codebase for the paper "A Gradient Flow Framework for Analyzing Network Pruning" ☆21 · Updated 4 years ago
- [CVPR 2021] Searching by Generating: Flexible and Efficient One-Shot NAS with Architecture Generator ☆38 · Updated 3 years ago
- [CVPRW 21] "BNN - BN = ? Training Binary Neural Networks without Batch Normalization", Tianlong Chen, Zhenyu Zhang, Xu Ouyang, Zechun Liu… ☆57 · Updated 3 years ago
- ☆22 · Updated 5 years ago
- [ICML 2022] ShiftAddNAS: Hardware-Inspired Search for More Accurate and Efficient Neural Networks ☆16 · Updated 3 years ago
- The code for Joint Neural Architecture Search and Quantization ☆13 · Updated 6 years ago
- [NeurIPS 2020] ShiftAddNet: A Hardware-Inspired Deep Network ☆73 · Updated 4 years ago
- Official codebase for our paper "Joslim: Joint Widths and Weights Optimization for Slimmable Neural Networks" ☆12 · Updated 3 years ago
- Official PyTorch Implementation of "Learning Architectures for Binary Networks" (ECCV 2020) ☆26 · Updated 4 years ago
- This repository implements the paper "Effective Training of Convolutional Neural Networks with Low-bitwidth Weights and Activations" ☆20 · Updated 3 years ago
- Position-based Scaled Gradient for Model Quantization and Pruning Code (NeurIPS 2020) ☆26 · Updated 4 years ago
- Code for High-Capacity Expert Binary Networks (ICLR 2021). ☆27 · Updated 3 years ago
- ☆10 · Updated 4 years ago
- [ICML 2021 Oral] "CATE: Computation-aware Neural Architecture Encoding with Transformers" by Shen Yan, Kaiqiang Song, Fei Liu, Mi Zhang ☆19 · Updated 4 years ago
- A PyTorch Implementation of Feature Boosting and Suppression ☆18 · Updated 4 years ago
- [NeurIPS 2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge", Geng Yuan, Xiaolong Ma, Yanzhi Wang et al… ☆18 · Updated 3 years ago
- Revisiting Parameter Sharing for Automatic Neural Channel Number Search, NeurIPS 2020 ☆21 · Updated 4 years ago
- Codes for Understanding Architectures Learnt by Cell-based Neural Architecture Search ☆27 · Updated 5 years ago
- [CVPR 2022] DiSparse: Disentangled Sparsification for Multitask Model Compression ☆13 · Updated 2 years ago
- How Do Adam and Training Strategies Help BNNs Optimization? In ICML 2021. ☆60 · Updated 4 years ago
- ☆25 · Updated 3 years ago
- Codes for accepted paper: "MetaQuant: Learning to Quantize by Learning to Penetrate Non-differentiable Quantization" in NeurIPS 2019 ☆54 · Updated 5 years ago
- Towards Compact CNNs via Collaborative Compression ☆11 · Updated 3 years ago
- [ICLR 2022] The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training by Shiwei Liu, Tianlo… ☆73 · Updated 2 years ago
- [NeurIPS 2019] E2-Train: Training State-of-the-art CNNs with Over 80% Less Energy ☆21 · Updated 5 years ago