xfey / pytorch-BigNAS
PyTorch implementation of BigNAS: Scaling Up Neural Architecture Search with Big Single-Stage Models
☆29 · Updated Aug 22, 2022
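BigNAS trains one weight-shared, single-stage supernet and slices deployable sub-networks out of it without retraining. Each training step follows the sandwich rule: the largest sub-network is trained on the true labels, and the smallest plus a few random sub-networks are trained by inplace distillation from the largest one's soft predictions. Below is a minimal, self-contained sketch of that step on a toy elastic-width MLP; `ElasticLinear`, `TinySupernet`, and the width grid are illustrative assumptions, not this repository's actual API.

```python
# Minimal sketch of BigNAS-style single-stage training with the sandwich
# rule and inplace distillation. ElasticLinear and the sampling scheme
# below are illustrative stand-ins, not this repo's API.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class ElasticLinear(nn.Linear):
    """Linear layer that can run with a reduced number of output features."""
    def forward(self, x, active_out=None):
        w, b = self.weight, self.bias
        if active_out is not None:           # slice out a narrower sub-layer
            w, b = w[:active_out], b[:active_out]
        return F.linear(x, w, b)

class TinySupernet(nn.Module):
    """Two-layer MLP whose hidden width is the searchable dimension."""
    def __init__(self, in_dim=32, max_hidden=64, num_classes=10):
        super().__init__()
        self.fc1 = ElasticLinear(in_dim, max_hidden)
        self.fc2 = nn.Linear(max_hidden, num_classes)
        self.max_hidden = max_hidden

    def forward(self, x, hidden=None):
        h = F.relu(self.fc1(x, active_out=hidden))
        if hidden is not None:               # zero-pad so fc2 always sees max width
            h = F.pad(h, (0, self.max_hidden - hidden))
        return self.fc2(h)

def sandwich_step(model, optimizer, x, y, widths=(16, 32, 48, 64), n_random=2):
    optimizer.zero_grad()
    # 1) Largest sub-network: trained with the true labels; its outputs
    #    also serve as the soft targets for inplace distillation.
    logits_max = model(x, hidden=max(widths))
    F.cross_entropy(logits_max, y).backward()
    soft = logits_max.detach().softmax(dim=-1)
    # 2) Smallest sub-network + a few random ones: distill from the largest.
    for w in [min(widths)] + random.sample(list(widths[1:-1]), n_random):
        logits = model(x, hidden=w)
        F.kl_div(logits.log_softmax(dim=-1), soft,
                 reduction="batchmean").backward()
    optimizer.step()                         # one update for all sampled subnets

model = TinySupernet()
opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
x, y = torch.randn(8, 32), torch.randint(0, 10, (8,))
sandwich_step(model, opt, x, y)
```

Because the biggest and smallest sub-networks are trained at every step, all intermediate widths stay well-behaved, which is what lets BigNAS deploy sliced child models directly instead of retraining each found architecture.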
Alternatives and similar repositories for pytorch-BigNAS
Users interested in pytorch-BigNAS are comparing it to the libraries listed below.
- XNAS: An effective, modular, and flexible Neural Architecture Search (NAS) framework. ☆47 · Updated Jun 29, 2022
- Neural Architecture Search for Neural Network Libraries ☆61 · Updated Jan 22, 2024
- ☆20 · Updated Apr 27, 2021
- Revisiting Parameter Sharing for Automatic Neural Channel Number Search (NeurIPS 2020) ☆22 · Updated Nov 15, 2020
- ☆10 · Updated Jul 27, 2020
- ☆28 · Updated Apr 26, 2023
- [NeurIPS 2021] "Stronger NAS with Weaker Predictors", Junru Wu, Xiyang Dai, Dongdong Chen, Yinpeng Chen, Mengchen Liu, Ye Yu, Zhangyang W… ☆27 · Updated Sep 23, 2022
- ☆11 · Updated Jan 10, 2025
- [ICLR 2021] HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark ☆114 · Updated Apr 18, 2023
- ☆25 · Updated Dec 11, 2021
- Neural Network Quantization With Fractional Bit-widths ☆11 · Updated Feb 19, 2021
- [TMLR] Official PyTorch implementation of the paper "Efficient Quantization-aware Training with Adaptive Coreset Selection" ☆37 · Updated Aug 20, 2024
- Official PyTorch implementation of "Evolving Search Space for Neural Architecture Search" ☆12 · Updated Aug 18, 2021
- ☆12 · Updated May 22, 2022
- The official implementation of the paper "PreNAS: Preferred One-Shot Learning Towards Efficient Neural Architecture Search" ☆31 · Updated Sep 5, 2023
- Harmonic-NAS: Hardware-Aware Multimodal Neural Architecture Search on Resource-constrained Devices (ACML 2023) ☆16 · Updated May 7, 2024
- Overcoming Multi-Model Forgetting (ICML 2019) ☆14 · Updated Jun 5, 2019
- Encodings for neural architecture search ☆29 · Updated Apr 5, 2021
- HW-PR-NAS: a single surrogate model trained to Pareto-rank architectures by accuracy, latency, and energy consumption ☆15 · Updated Oct 15, 2022
- The code for "Joint Neural Architecture Search and Quantization" ☆14 · Updated Apr 10, 2019
- AlphaNet: Improved Training of Supernet with Alpha-Divergence ☆100 · Updated Aug 12, 2021
- ☆17 · Updated Jul 10, 2022
- Code for NASViT ☆67 · Updated Apr 25, 2022
- This repository contains the publishable code for the CVPR 2021 paper "TransNAS-Bench-101: Improving Transferability and Generalizability of …" ☆24 · Updated Apr 11, 2023
- ☆20 · Updated Mar 9, 2023
- PyTorch implementation of "Near-Lossless Post-Training Quantization of Deep Neural Networks via a Piecewise Linear Approximation" ☆23 · Updated Feb 17, 2020
- ☆24 · Updated Dec 21, 2021
- ☆26 · Updated Apr 12, 2022
- Code for "ECoFLaP: Efficient Coarse-to-Fine Layer-Wise Pruning for Vision-Language Models" (ICLR 2024) ☆20 · Updated Feb 16, 2024
- ☆21 · Updated Feb 11, 2022
- [ICML 2021 Oral] "CATE: Computation-aware Neural Architecture Encoding with Transformers" by Shen Yan, Kaiqiang Song, Fei Liu, Mi Zhang ☆19 · Updated Jun 23, 2021
- Implementation of PGONAS (CVPR 2022 Workshops) and RD-NAS (ICASSP 2023) ☆23 · Updated Apr 25, 2023
- ☆28 · Updated Oct 21, 2020
- Auto^6ML is a Jittor library for machine learning automation. ☆26 · Updated Sep 28, 2024
- Repository for "Accelerating Neural Architecture Search using Performance Prediction" (ICLR Workshop 2018) ☆18 · Updated Mar 21, 2018
- PyTorch implementation of the paper "Generalizable Mixed-Precision Quantization via Attribution Rank Preservation", which is… ☆24 · Updated Aug 17, 2021
- ☆73 · Updated Dec 16, 2025
- Code for "AttentiveNAS: Improving Neural Architecture Search via Attentive Sampling" ☆105 · Updated Sep 29, 2021
- Official implementation of the ECCV 2022 paper "Mixed-Precision Neural Network Quantization via Learned Layer-wise Importance" (LIMPQ) ☆61 · Updated Mar 19, 2023