mit-han-lab / neurips-micronet
[JMLR'20] NeurIPS 2019 MicroNet Challenge Efficient Language Modeling, Champion
☆41 · Updated 4 years ago
Alternatives and similar repositories for neurips-micronet
Users interested in neurips-micronet are comparing it to the libraries listed below.
- ☆69 · Updated 5 years ago
- Code for paper "SWALP: Stochastic Weight Averaging for Low-Precision Training" ☆62 · Updated 6 years ago
- PyTorch implementation of HashedNets ☆37 · Updated 2 years ago
- Butterfly matrix multiplication in PyTorch ☆175 · Updated 2 years ago
- PyTorch implementation of L2L execution algorithm ☆108 · Updated 2 years ago
- [NeurIPS 2022] DataMUX: Data Multiplexing for Neural Networks ☆60 · Updated 2 years ago
- Block Sparse movement pruning ☆81 · Updated 4 years ago
- Block-sparse primitives for PyTorch ☆160 · Updated 4 years ago
- Code release to reproduce ASHA experiments from "Random Search and Reproducibility for NAS" ☆22 · Updated 6 years ago
- Discovering Neural Wirings (https://arxiv.org/abs/1906.00586) ☆136 · Updated 5 years ago
- Using ideas from product quantization for state-of-the-art neural network compression ☆146 · Updated 4 years ago
- Official implementation of "UNAS: Differentiable Architecture Search Meets Reinforcement Learning", CVPR 2020 Oral ☆61 · Updated 2 years ago
- [ACL'20] HAT: Hardware-Aware Transformers for Efficient Natural Language Processing ☆336 · Updated last year
- Official code repository of the paper "Linear Transformers Are Secretly Fast Weight Programmers" ☆108 · Updated 4 years ago
- [ICLR 2020] Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks ☆140 · Updated 5 years ago
- Official PyTorch implementation of "Rapid Neural Architecture Search by Learning to Generate Graphs from Datasets" (ICLR 2021) ☆64 · Updated last year
- ☆22 · Updated 7 years ago
- [ICML 2020] Code for "PowerNorm: Rethinking Batch Normalization in Transformers" (https://arxiv.org/abs/2003.07845) ☆120 · Updated 4 years ago
- AlphaNet: Improved Training of Supernet with Alpha-Divergence ☆100 · Updated 4 years ago
- A GPT, made only of MLPs, in Jax ☆58 · Updated 4 years ago
- ☆41 · Updated 4 years ago
- Simple NumPy implementation of the FAVOR+ attention mechanism (https://teddykoker.com/2020/11/performers/) ☆38 · Updated 4 years ago
- All about acceleration and compression of Deep Neural Networks ☆33 · Updated 6 years ago
- ☆220 · Updated 2 years ago
- Customized matrix multiplication kernels ☆57 · Updated 3 years ago
- MixPath: A Unified Approach for One-shot Neural Architecture Search ☆29 · Updated 5 years ago
- PyTorch library for factorized L0-based pruning ☆45 · Updated 2 years ago
- Identify a binary weight or binary weight and activation subnetwork within a randomly initialized network by only pruning and binarizing … ☆51 · Updated 3 years ago
- Train ImageNet in 18 minutes on AWS ☆133 · Updated last year
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight … ☆236 · Updated 2 years ago