mit-han-lab / neurips-micronet
[JMLR'20] NeurIPS 2019 MicroNet Challenge, Efficient Language Modeling track: Champion
☆40 · Updated 4 years ago
Alternatives and similar repositories for neurips-micronet
Users interested in neurips-micronet are comparing it to the libraries listed below.
- Code for the paper "SWALP: Stochastic Weight Averaging for Low-Precision Training". ☆62 · Updated 6 years ago
- PyTorch library for factorized L0-based pruning. ☆45 · Updated last year
- PyTorch implementation of HashedNets ☆36 · Updated 2 years ago
- Implementation of Kronecker Attention in PyTorch ☆19 · Updated 4 years ago
- A "gym"-style toolkit for building lightweight NAS systems. ☆13 · Updated 2 years ago
- ☆70 · Updated 5 years ago
- Block-sparse primitives for PyTorch ☆155 · Updated 4 years ago
- Block-sparse movement pruning ☆79 · Updated 4 years ago
- Online Normalization for Training Neural Networks (companion repository) ☆82 · Updated 4 years ago
- Simple NumPy implementation of the FAVOR+ attention mechanism, https://teddykoker.com/2020/11/performers/ ☆38 · Updated 4 years ago
- Code for BlockSwap (ICLR 2020). ☆33 · Updated 4 years ago
- ☆41 · Updated 3 years ago
- Custom CUDA kernel for {2, 3}D relative attention with a PyTorch wrapper ☆43 · Updated 5 years ago
- ☆22 · Updated 7 years ago
- Official PyTorch implementation of "Rapid Neural Architecture Search by Learning to Generate Graphs from Datasets" (ICLR 2021) ☆64 · Updated 10 months ago
- All about acceleration and compression of deep neural networks ☆33 · Updated 5 years ago
- A GPT made only of MLPs, in Jax ☆58 · Updated 3 years ago
- Successfully training approximations to full-rank matrices for efficiency in deep learning. ☆17 · Updated 4 years ago
- A PyTorch implementation of the LSTM experiments in the paper: Why Gradient Clipping Accelerates Training: A Theoretical Justification f… ☆46 · Updated 5 years ago
- ICML 2019 accepted paper: Overcoming Multi-Model Forgetting ☆13 · Updated 6 years ago
- Code release to reproduce ASHA experiments from "Random Search and Reproducibility for NAS." ☆22 · Updated 5 years ago
- [NeurIPS 2019] E2-Train: Training State-of-the-art CNNs with Over 80% Less Energy ☆21 · Updated 5 years ago
- Implementation of a Transformer using ReLA (Rectified Linear Attention) from https://arxiv.org/abs/2104.07012 ☆50 · Updated 3 years ago
- PyProf2: PyTorch profiling tool ☆82 · Updated 4 years ago
- Identify a binary-weight (or binary weight-and-activation) subnetwork within a randomly initialized network by only pruning and binarizing … ☆52 · Updated 3 years ago
- Parameter-Efficient Transfer Learning with Diff Pruning ☆73 · Updated 4 years ago
- Code accompanying the NeurIPS 2020 paper: WoodFisher (Singh & Alistarh, 2020) ☆52 · Updated 4 years ago
- A collection of training tricks for binarized neural networks. ☆72 · Updated 4 years ago
- Code accompanying the paper "Normalized Attention Without Probability Cage" ☆16 · Updated 3 years ago
- Compression of an NMT transformer model with tensor methods ☆48 · Updated 5 years ago
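Several of the attention-variant repositories above replace the softmax in standard attention with something cheaper. As a flavor of that family, here is a minimal, unbatched, single-head NumPy sketch of the core idea behind ReLA (Rectified Linear Attention, arXiv:2104.07012, listed above): use ReLU instead of softmax on the attention scores, then re-scale the output with an RMSNorm-style normalization. This is an illustrative sketch only; the paper's full method also uses a learned normalization gain and gating, which are omitted here.

```python
import numpy as np

def rela_attention(q, k, v, eps=1e-6):
    """Sketch of Rectified Linear Attention (ReLA).

    q, k, v: arrays of shape (seq_len, d).
    Softmax is replaced by ReLU, giving sparse, unnormalized attention
    weights; an RMSNorm-style rescaling compensates for the missing
    softmax normalization. (Learned gain/gating from the paper omitted.)
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)        # (seq, seq) scaled similarity scores
    weights = np.maximum(scores, 0.0)    # ReLU in place of softmax
    out = weights @ v                    # (seq, d) weighted sum of values
    # RMS normalization over the feature dimension
    rms = np.sqrt(np.mean(out ** 2, axis=-1, keepdims=True) + eps)
    return out / rms

rng = np.random.default_rng(0)
q = rng.standard_normal((4, 8))
k = rng.standard_normal((4, 8))
v = rng.standard_normal((4, 8))
out = rela_attention(q, k, v)
print(out.shape)  # (4, 8)
```

Because ReLU zeroes out negative scores, many attention weights become exactly zero, which is the sparsity property these efficient-attention repositories exploit.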