automl / HW-GPT-Bench
HW-GPT-Bench: Hardware-Aware Architecture Benchmark for Language Models
☆21 · Updated 8 months ago
Alternatives and similar repositories for HW-GPT-Bench
Users interested in HW-GPT-Bench are comparing it to the repositories listed below.
- ☆78 · Updated last year
- ☆29 · Updated last year
- The first collection of surrogate benchmarks for Joint Architecture and Hyperparameter Search. ☆15 · Updated 2 years ago
- Introducing diverse tasks for NAS ☆50 · Updated 2 years ago
- [ICLR 2023] "Revisiting Pruning at Initialization Through the Lens of Ramanujan Graph" by Duc Hoang, Shiwei Liu, Radu Marculescu, Atlas W… ☆14 · Updated 2 years ago
- Implementation of Continuous Sparsification, a method for pruning and ticket search in deep networks ☆33 · Updated 3 years ago
- This repository contains the publishable code for the CVPR 2021 paper TransNAS-Bench-101: Improving Transferability and Generalizability of … ☆23 · Updated 2 years ago
- ☆26 · Updated 2 years ago
- [ICML 2021] "Do We Actually Need Dense Over-Parameterization? In-Time Over-Parameterization in Sparse Training" by Shiwei Liu, Lu Yin, De… ☆45 · Updated last year
- Lightweight PyTorch implementation of RigL, a sparse-to-sparse optimizer. ☆57 · Updated 3 years ago
- Generic Neural Architecture Search via Regression (NeurIPS'21 Spotlight) ☆36 · Updated 2 years ago
- [ICML 2022] Training Your Sparse Neural Network Better with Any Mask. Ajay Jaiswal, Haoyu Ma, Tianlong Chen, Ying Ding, and Zhangyang Wang ☆28 · Updated 3 years ago
- Code for our ICLR 2021 paper "DrNAS: Dirichlet Neural Architecture Search" ☆43 · Updated 4 years ago
- [NeurIPS 2021] Sparse Training via Boosting Pruning Plasticity with Neuroregeneration ☆31 · Updated 2 years ago
- [IJCAI'22 Survey] Recent Advances on Neural Network Pruning at Initialization. ☆59 · Updated last year
- Soft Threshold Weight Reparameterization for Learnable Sparsity ☆91 · Updated 2 years ago
- Code accompanying the NeurIPS 2020 paper: WoodFisher (Singh & Alistarh, 2020) ☆53 · Updated 4 years ago
- ☆18 · Updated 5 years ago
- ☆13 · Updated 2 years ago
- Good Subnetworks Provably Exist: Pruning via Greedy Forward Selection ☆21 · Updated 4 years ago
- Comparison of "pruning at initialization prior to training" methods (SynFlow/SNIP/GraSP) in PyTorch ☆16 · Updated last year
- [NeurIPS 2021] "MEST: Accurate and Fast Memory-Economic Sparse Training Framework on the Edge", Geng Yuan, Xiaolong Ma, Yanzhi Wang et al… ☆18 · Updated 3 years ago
- Code for Sanity-Checking Pruning Methods: Random Tickets can Win the Jackpot ☆42 · Updated 4 years ago
- [NeurIPS 2020] "Does Unsupervised Architecture Representation Learning Help Neural Architecture Search?" by Shen Yan, Yu Zheng, Wei Ao, X… ☆49 · Updated 4 years ago
- ☆26 · Updated 3 years ago
- [ICLR '21] Interpretable Neural Architecture Search using Bayesian Optimisation with Weisfeiler-Lehman Kernel (NAS-BOWL) ☆24 · Updated 3 years ago
- [ICLR 2021 Outstanding Paper] Rethinking Architecture Selection in Differentiable NAS ☆105 · Updated 3 years ago
- ☆14 · Updated 4 years ago
- Smooth Variational Graph Embeddings for Efficient Neural Architecture Search ☆16 · Updated 2 years ago
- [ICLR 2023] NTK-SAP: Improving neural network pruning by aligning training dynamics ☆19 · Updated 2 years ago