aaronserianni / training-free-nas
[ACL'22] Training-free Neural Architecture Search for RNNs and Transformers
☆14 · Updated last year
Alternatives and similar repositories for training-free-nas
Users interested in training-free-nas are comparing it to the libraries listed below.
- Implementation of PGONAS for CVPR22W and RD-NAS for ICASSP23 ☆22 · Updated 2 years ago
- ☆23 · Updated 3 years ago
- [Preprint] Why is the State of Neural Network Pruning so Confusing? On the Fairness, Comparison Setup, and Trainability in Network Prunin… ☆41 · Updated 3 months ago
- Official PyTorch implementation of "Meta-prediction Model for Distillation-Aware NAS on Unseen Datasets" (ICLR 2023 notable top 25%) ☆26 · Updated last year
- [NeurIPS 2024] Search for Efficient LLMs ☆15 · Updated 11 months ago
- The official project website of "NORM: Knowledge Distillation via N-to-One Representation Matching" (The paper of NORM is published in IC… ☆20 · Updated 2 years ago
- [ICCV 2023] EMQ: Evolving Training-free Proxies for Automated Mixed Precision Quantization ☆28 · Updated 2 years ago
- [ICLR 2024] This is the official PyTorch implementation of "QLLM: Accurate and Efficient Low-Bitwidth Quantization for Large Language Mod… ☆30 · Updated last year
- [ICLR'23] Trainability Preserving Neural Pruning (PyTorch) ☆34 · Updated 2 years ago
- KVTuner: Sensitivity-Aware Layer-wise Mixed Precision KV Cache Quantization for Efficient and Nearly Lossless LLM Inference ☆24 · Updated 7 months ago
- [TMLR] Official PyTorch implementation of the paper "Efficient Quantization-aware Training with Adaptive Coreset Selection" ☆35 · Updated last year
- Official PyTorch implementation of "Evolving Search Space for Neural Architecture Search" ☆12 · Updated 4 years ago
- ☆17 · Updated 3 years ago
- BESA is a differentiable weight pruning technique for large language models. ☆17 · Updated last year
- A highly modular PyTorch framework with a focus on Neural Architecture Search (NAS). ☆23 · Updated 4 years ago
- Code for RepNAS ☆14 · Updated 4 years ago
- [NeurIPS 2022 Spotlight] This is the official PyTorch implementation of "EcoFormer: Energy-Saving Attention with Linear Complexity" ☆74 · Updated 3 years ago
- ☆25 · Updated 4 years ago
- Code to reproduce the experiments of the ICLR'24 paper "Sparse Model Soups: A Recipe for Improved Pruning via Model Averaging" ☆12 · Updated 2 months ago
- [ICLR 2022] "Learning Pruning-Friendly Networks via Frank-Wolfe: One-Shot, Any-Sparsity, and No Retraining" by Lu Miao*, Xiaolong Luo*, T… ☆33 · Updated 3 years ago
- [ICLR 2022] "Unified Vision Transformer Compression" by Shixing Yu*, Tianlong Chen*, Jiayi Shen, Huan Yuan, Jianchao Tan, Sen Yang, Ji Li… ☆55 · Updated 2 years ago
- ☆13 · Updated 4 years ago
- [ICML 2022] "DepthShrinker: A New Compression Paradigm Towards Boosting Real-Hardware Efficiency of Compact Neural Networks", by Yonggan … ☆72 · Updated 3 years ago
- ☆28 · Updated 2 years ago
- We propose a lossless compression algorithm based on the NTK matrix for DNNs. The compressed network yields asymptotically the same NTK a… ☆26 · Updated 2 years ago
- Official PyTorch implementation of CD-MOE ☆12 · Updated 8 months ago
- ☆17 · Updated 3 years ago
- [ICLR 2022] "As-ViT: Auto-scaling Vision Transformers without Training" by Wuyang Chen, Wei Huang, Xianzhi Du, Xiaodan Song, Zhangyang Wa… ☆76 · Updated 3 years ago
- ☆26 · Updated 3 years ago
- [ICCV 2023] Efficient Joint Optimization of Layer-Adaptive Weight Pruning in Deep Neural Networks ☆25 · Updated 2 years ago