google-research / rigl
End-to-end training of sparse deep neural networks with little-to-no performance loss.
☆324 · Updated 2 years ago
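The repository accompanies RigL-style dynamic sparse training: a fixed weight budget is kept throughout training by periodically dropping the lowest-magnitude active weights and regrowing connections where the dense gradient is largest. Below is a minimal PyTorch-style sketch of that drop-and-grow mask update, written from the general description above rather than from the repository's own implementation or API; the function name, parameters, and toy setup are illustrative assumptions.

```python
# Hedged sketch of a RigL-style drop-and-grow mask update (illustrative only;
# not the repository's actual implementation or API).
import torch


def drop_and_grow(weight, dense_grad, mask, update_fraction=0.3):
    """Drop the lowest-magnitude active weights, then regrow the same number
    of inactive connections with the largest gradient magnitude."""
    n_update = int(update_fraction * mask.sum().item())
    if n_update == 0:
        return mask

    flat_mask = mask.clone().flatten()

    # Drop: among active connections, remove those with the smallest |w|.
    drop_scores = weight.abs().flatten().masked_fill(flat_mask == 0, float("inf"))
    flat_mask[torch.topk(drop_scores, n_update, largest=False).indices] = 0

    # Grow: among inactive connections, enable those with the largest |grad|.
    # (Grown weights are typically initialized to zero before training resumes.)
    grow_scores = dense_grad.abs().flatten().masked_fill(flat_mask == 1, float("-inf"))
    flat_mask[torch.topk(grow_scores, n_update, largest=True).indices] = 1

    return flat_mask.view_as(mask)


# Toy usage: rewire a ~90%-sparse weight matrix after one backward pass.
torch.manual_seed(0)
w = torch.randn(64, 64)
mask = (torch.rand_like(w) < 0.1).float()
x, target = torch.randn(128, 64), torch.randn(128, 64)

w_eff = (w * mask).requires_grad_(True)          # sparse forward pass
loss = ((x @ w_eff.t() - target) ** 2).mean()
loss.backward()                                  # dense grad w.r.t. effective weights

mask = drop_and_grow(w, w_eff.grad, mask)
print(f"active connections after update: {int(mask.sum())}")
```

The sparsity level stays fixed because exactly as many connections are grown as were dropped in each update.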
Alternatives and similar repositories for rigl
Users interested in rigl are comparing it to the libraries listed below.
- ☆144 · Updated 2 years ago
- Sparse learning library and sparse momentum resources. ☆384 · Updated 3 years ago
- PyTorch library to facilitate development and standardized evaluation of neural network pruning methods. ☆430 · Updated 2 years ago
- [ICLR 2020] Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks ☆138 · Updated 4 years ago
- ☆226 · Updated last year
- Code for Neural Architecture Search without Training (ICML 2021) ☆472 · Updated 4 years ago
- ☆192 · Updated 4 years ago
- A research library for pytorch-based neural network pruning, compression, and more. ☆162 · Updated 2 years ago
- Fast Block Sparse Matrices for Pytorch ☆548 · Updated 4 years ago
- A Re-implementation of Fixed-update Initialization ☆152 · Updated 6 years ago
- A repository in preparation for open-sourcing lottery ticket hypothesis code. ☆632 · Updated 2 years ago
- Naszilla is a Python library for neural architecture search (NAS) ☆313 · Updated 2 years ago
- Discovering Neural Wirings (https://arxiv.org/abs/1906.00586) ☆137 · Updated 5 years ago
- ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning ☆277 · Updated 2 years ago
- Implementation for the Lookahead Optimizer. ☆241 · Updated 3 years ago
- Butterfly matrix multiplication in PyTorch ☆174 · Updated last year
- MONeT framework for reducing memory consumption of DNN training ☆173 · Updated 4 years ago
- 🧀 Pytorch code for the Fromage optimiser. ☆125 · Updated last year
- [ICLR 2021] "Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective" by Wuyang Chen, Xinyu Gong, … ☆169 · Updated 3 years ago
- Block-sparse primitives for PyTorch ☆157 · Updated 4 years ago
- Gradient based Hyperparameter Tuning library in PyTorch ☆290 · Updated 5 years ago
- Code for "Picking Winning Tickets Before Training by Preserving Gradient Flow" (https://openreview.net/pdf?id=SkgsACVKPH) ☆105 · Updated 5 years ago
- Neural Architecture Transfer (Arxiv'20), PyTorch Implementation ☆155 · Updated 5 years ago
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723) ☆486 · Updated 4 years ago
- Estimate/count FLOPS for a given neural network using pytorch ☆305 · Updated 3 years ago
- ☆70 · Updated 5 years ago
- Accelerate your Neural Architecture Search (NAS) through fast, reproducible and modular research. ☆480 · Updated 9 months ago
- This repository contains the results for the paper: "Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers" ☆181 · Updated 4 years ago
- PyTorch implementation of L2L execution algorithm ☆107 · Updated 2 years ago
- [ACL'20] HAT: Hardware-Aware Transformers for Efficient Natural Language Processing ☆336 · Updated last year