IHIaadj / HW-PR-NAS
HW-PR-NAS is a single surrogate model trained to Pareto-rank architectures based on accuracy, latency, and energy consumption.
☆13 · Updated 2 years ago
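HW-PR-NAS's surrogate learns to predict Pareto ranks directly; as a minimal illustration of the ranking objective itself (not the repository's actual code), here is a sketch of non-dominated sorting over hypothetical (accuracy, latency, energy) tuples, where higher accuracy and lower latency/energy are better:

```python
def dominates(a, b):
    """True if architecture a Pareto-dominates b.

    a, b are (accuracy, latency, energy) tuples: accuracy is
    maximized, latency and energy are minimized.
    """
    no_worse = a[0] >= b[0] and a[1] <= b[1] and a[2] <= b[2]
    strictly_better = a[0] > b[0] or a[1] < b[1] or a[2] < b[2]
    return no_worse and strictly_better

def pareto_ranks(archs):
    """Assign rank 0 to the non-dominated front, rank 1 to the next front, etc."""
    remaining = dict(enumerate(archs))
    ranks, r = {}, 0
    while remaining:
        # Current front: architectures not dominated by any other remaining one.
        front = [i for i, a in remaining.items()
                 if not any(dominates(b, a) for j, b in remaining.items() if j != i)]
        for i in front:
            ranks[i] = r
            del remaining[i]
        r += 1
    return ranks
```

A surrogate like HW-PR-NAS would be trained so that its predicted scores reproduce these ranks without ever evaluating latency or energy on hardware at search time.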
Alternatives and similar repositories for HW-PR-NAS
Users interested in HW-PR-NAS are comparing it to the libraries listed below.
- [ICLR 2021] HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark (☆111, updated 2 years ago)
- μNAS: a neural architecture search (NAS) system that designs small-yet-powerful microcontroller-compatible neural networks (☆80, updated 4 years ago)
- Quantization in the Jagged Loss Landscape of Vision Transformers (☆13, updated last year)
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) (☆40, updated 4 years ago)
- Reproduction of the quantization paper PACT (☆64, updated 3 years ago)
- Official implementation of the ECCV 2022 paper LIMPQ, "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance" (☆56, updated 2 years ago)
- Official implementation of Robustifying and Boosting Training-Free Neural Architecture Search (☆11, updated last year)
- [ICLR 2022 Oral] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization (☆95, updated 3 years ago)
- Unofficial PyTorch implementation of Learned Step Size Quantization (LSQ, ICLR 2020) (☆135, updated 4 years ago)
- A collection of research papers on efficient training of DNNs (☆70, updated 3 years ago)
- PyTorch implementation of APoT quantization (ICLR 2020) (☆277, updated 7 months ago)
- DeiT implementation for Q-ViT (☆25, updated 2 months ago)
- Hybrid Tiny Hardware-Aware Neural Architecture Search (☆15, updated 2 years ago)
- Neural Architecture Search for Neural Network Libraries (☆59, updated last year)
- Post-training sparsity-aware quantization (☆34, updated 2 years ago)
- Any-Precision Deep Neural Networks (AAAI 2021) (☆60, updated 5 years ago)
- AFP: a hardware-friendly quantization framework for DNNs, contributed by Fangxin Liu and Wenbo Zhao (☆13, updated 3 years ago)
- Model compression by constrained optimization, using the Learning-Compression (LC) algorithm (☆73, updated 3 years ago)
- List of papers on Vision Transformer quantization and hardware acceleration in recent AI conferences and journals (☆92, updated last year)
- Official PyTorch implementation of HELP: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning (NeurIPS 2021 Spotlight) (☆63, updated 11 months ago)
- Implementation of "NITI: Training Integer Neural Networks Using Integer-only Arithmetic" (arXiv) (☆84, updated 2 years ago)
- Generic Neural Architecture Search via Regression (NeurIPS'21 Spotlight) (☆36, updated 2 years ago)