sony / nnabla-nas
Neural Architecture Search for Neural Network Libraries
☆60 · Updated last year
Alternatives and similar repositories for nnabla-nas
Users interested in nnabla-nas are comparing it to the libraries listed below.
- AlphaNet: Improved Training of Supernet with Alpha-Divergence ☆100 · Updated 4 years ago
- ☆43 · Updated last year
- Code for NASViT ☆67 · Updated 3 years ago
- Code for ICML 2022 paper "SPDY: Accurate Pruning with Speedup Guarantees" ☆20 · Updated 2 years ago
- ☆78 · Updated 3 years ago
- Binarize convolutional neural networks using pytorch ☆149 · Updated 3 years ago
- ☆25 · Updated 4 years ago
- A research library for pytorch-based neural network pruning, compression, and more. ☆163 · Updated 3 years ago
- ☆28 · Updated 2 years ago
- Official implementation for ECCV 2022 paper LIMPQ - "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance" ☆61 · Updated 2 years ago
- [ICCV-2023] EMQ: Evolving Training-free Proxies for Automated Mixed Precision Quantization ☆28 · Updated 2 years ago
- [ICLR 2022] "Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, and No Retraining" by Lu Miao*, Xiaolong Luo*, T… ☆33 · Updated 3 years ago
- [ICLR 2021] HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark ☆113 · Updated 2 years ago
- [ICLR 2022 Oral] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization ☆93 · Updated 3 years ago
- [ICML 2022] "DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks", by Yonggan … ☆72 · Updated 3 years ago
- ☆48 · Updated 5 years ago
- [ICLR 2022] The Unreasonable Effectiveness of Random Pruning: Return of the Most Naive Baseline for Sparse Training by Shiwei Liu, Tianlo… ☆77 · Updated 2 years ago
- Train neural networks with joint quantization and pruning on both weights and activations using any pytorch modules ☆43 · Updated 3 years ago
- Code for High-Capacity Expert Binary Networks (ICLR 2021). ☆27 · Updated 4 years ago
- Post-training sparsity-aware quantization ☆34 · Updated 2 years ago
- PyTorch implementation of Towards Efficient Training for Neural Network Quantization ☆16 · Updated 5 years ago
- This is the official PyTorch implementation for "Sharpness-aware Quantization for Deep Neural Networks". ☆43 · Updated 4 years ago
- ☆229 · Updated 4 years ago
- ☆26 · Updated 3 years ago
- Position-based Scaled Gradient for Model Quantization and Pruning Code (NeurIPS 2020) ☆25 · Updated 5 years ago
- μNAS is a neural architecture search (NAS) system that designs small-yet-powerful microcontroller-compatible neural networks. ☆82 · Updated 4 years ago
- [ICLR 2021] CompOFA: Compound Once-For-All Networks For Faster Multi-Platform Deployment ☆25 · Updated 2 years ago
- Dynamic Neural Architecture Search Toolkit ☆31 · Updated last year
- [NeurIPS 2022 Spotlight] This is the official PyTorch implementation of "EcoFormer: Energy-Saving Attention with Linear Complexity" ☆74 · Updated 3 years ago
- [NeurIPS 2020] ShiftAddNet: A Hardware-Inspired Deep Network ☆74 · Updated 5 years ago