SamsungLabs / eagle
Measuring and predicting on-device metrics (latency, power, etc.) of machine learning models
(☆66, updated last year)
Alternatives and similar repositories for eagle:
Users interested in eagle are comparing it to the repositories listed below.
- [ICLR 2021] HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark (☆109, updated last year)
- Generic Neural Architecture Search via Regression (NeurIPS'21 Spotlight) (☆36, updated 2 years ago)
- Zero-Cost Proxies for Lightweight NAS (☆148, updated last year)
- BSQ: Exploring Bit-Level Sparsity for Mixed-Precision Neural Network Quantization (ICLR 2021) (☆40, updated 4 years ago)
- Any-Precision Deep Neural Networks (AAAI 2021) (☆58, updated 4 years ago)
- Official PyTorch implementation of HELP: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning (NeurIPS 2021 Spotlight) (☆61, updated 6 months ago)
- Post-training sparsity-aware quantization (☆34, updated 2 years ago)
- Official implementation of LIMPQ, "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance" (ECCV 2022) (☆51, updated last year)
- XNAS: An effective, modular, and flexible Neural Architecture Search (NAS) framework (☆48, updated 2 years ago)
- aw_nas: A Modularized and Extensible NAS Framework (☆247, updated last year)
- PyTorch implementation of EdMIPS: https://arxiv.org/pdf/2004.05795.pdf (☆58, updated 4 years ago)
- DNN quantization with outlier channel splitting (☆112, updated 4 years ago)
- A PyTorch implementation of DoReFa-Net (☆134, updated 5 years ago)
- NATS-Bench: Benchmarking NAS Algorithms for Architecture Topology and Size (TPAMI 2021) (☆178, updated 2 years ago)
- Code for ICML 2021 submission (☆35, updated 3 years ago)
- [ICLR 2022 Oral] F8Net: Fixed-Point 8-bit Only Multiplication for Network Quantization (☆94, updated 2 years ago)
- [CVPR 2020] APQ: Joint Search for Network Architecture, Pruning and Quantization Policy (☆157, updated 4 years ago)
- Conditional channel- and precision-pruning on neural networks (☆72, updated 5 years ago)
- BitPack is a practical tool to efficiently save ultra-low-precision/mixed-precision quantized models (☆51, updated 2 years ago)
- PyTorch implementation of Learned Step Size Quantization (LSQ) from ICLR 2020 (unofficial) (☆128, updated 4 years ago)
- Code for "AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling" (☆104, updated 3 years ago)
- ProxQuant: Quantized Neural Networks via Proximal Operators (☆29, updated 6 years ago)
- [ICLR 2021 Outstanding Paper] Rethinking Architecture Selection in Differentiable NAS (☆103, updated 3 years ago)
- Implementation for the paper "Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization" (☆73, updated 5 years ago)