google-research / rigl
End-to-end training of sparse deep neural networks with little-to-no performance loss.
☆320 · Updated 2 years ago
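For context, RigL keeps a fixed fraction of weights active and periodically rewires the sparse connectivity during training: it drops the lowest-magnitude active weights, then regrows the same number of connections where the dense gradient magnitude is largest. A minimal NumPy sketch of one such connectivity update (the function name and `drop_frac` default are illustrative, not the repo's actual API):

```python
import numpy as np

def rigl_update(weights, mask, grads, drop_frac=0.3):
    """One RigL-style drop/grow step (sketch, not the official implementation).

    Drops the drop_frac fraction of active weights with the smallest
    magnitude, then regrows the same number of inactive connections
    where the gradient magnitude is largest, keeping sparsity constant.
    """
    active = mask.astype(bool)
    k = int(drop_frac * active.sum())
    if k == 0:
        return mask
    new_mask = mask.copy()
    # Drop: the k active weights with the smallest |weight|.
    w = np.where(active, np.abs(weights), np.inf)
    drop_idx = np.argsort(w, axis=None)[:k]
    new_mask.flat[drop_idx] = 0
    # Grow: the k inactive positions with the largest |gradient|
    # (just-dropped positions are excluded so they are not revived).
    g = np.where(new_mask.astype(bool), -np.inf, np.abs(grads))
    g.flat[drop_idx] = -np.inf
    grow_idx = np.argsort(g, axis=None)[::-1][:k]
    new_mask.flat[grow_idx] = 1
    return new_mask
```

The update is a pure mask operation, so the total number of active connections is preserved across steps.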
Alternatives and similar repositories for rigl:
Users interested in rigl are comparing it to the libraries listed below.
- Sparse learning library and sparse momentum resources. ☆380 · Updated 2 years ago
- Fast Block Sparse Matrices for PyTorch ☆546 · Updated 4 years ago
- [ICLR 2020] Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks ☆137 · Updated 4 years ago
- PyTorch library to facilitate development and standardized evaluation of neural network pruning methods. ☆428 · Updated last year
- Code for Neural Architecture Search without Training (ICML 2021) ☆465 · Updated 3 years ago
- A repository in preparation for open-sourcing lottery ticket hypothesis code. ☆629 · Updated 2 years ago
- A research library for PyTorch-based neural network pruning, compression, and more. ☆160 · Updated 2 years ago
- ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning ☆271 · Updated 2 years ago
- Code for "Picking Winning Tickets Before Training by Preserving Gradient Flow" (https://openreview.net/pdf?id=SkgsACVKPH) ☆103 · Updated 5 years ago
- Efficient PyTorch Hessian eigendecomposition tools! ☆370 · Updated last year
- Discovering Neural Wirings (https://arxiv.org/abs/1906.00586) ☆137 · Updated 5 years ago
- Estimate/count FLOPs for a given neural network using PyTorch ☆303 · Updated 2 years ago
- Butterfly matrix multiplication in PyTorch ☆168 · Updated last year
- PyTorch layer-by-layer model profiler ☆606 · Updated 3 years ago
- Pruning Neural Networks with Taylor criterion in PyTorch ☆316 · Updated 5 years ago
- SNIP: Single-shot Network Pruning based on Connection Sensitivity ☆113 · Updated 5 years ago
- PyTorch implementation of the paper "SNIP: Single-shot Network Pruning based on Connection Sensitivity" by Lee et al. ☆107 · Updated 5 years ago
- Naszilla is a Python library for neural architecture search (NAS) ☆307 · Updated 2 years ago
- Block-sparse primitives for PyTorch ☆154 · Updated 3 years ago
- Soft Threshold Weight Reparameterization for Learnable Sparsity ☆88 · Updated 2 years ago
- MONeT framework for reducing memory consumption of DNN training ☆173 · Updated 3 years ago
- A re-implementation of Fixed-update Initialization ☆152 · Updated 5 years ago
- Code release for the paper "Random Search and Reproducibility for NAS" ☆167 · Updated 5 years ago
- Mode Connectivity and Fast Geometric Ensembles in PyTorch ☆269 · Updated 2 years ago
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723) ☆484 · Updated 3 years ago
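Several entries above implement SNIP, which prunes a network in a single shot before training: each connection is scored by the sensitivity |weight × gradient| from one backward pass over a minibatch, and only the most sensitive fraction is kept. A minimal NumPy sketch of the scoring step (the function name and `keep_frac` default are illustrative, not either repo's API; exact ties at the threshold may keep a few extra connections):

```python
import numpy as np

def snip_mask(weights, grads, keep_frac=0.5):
    """SNIP-style single-shot pruning mask (sketch).

    Scores each connection by |w * g|, an estimate of how much the loss
    changes if that connection is removed, and keeps the top keep_frac.
    """
    # Connection sensitivity from a single gradient evaluation.
    scores = np.abs(weights * grads)
    k = max(1, int(keep_frac * scores.size))
    # Threshold at the k-th largest score; ties may keep slightly more.
    threshold = np.sort(scores, axis=None)[::-1][k - 1]
    return (scores >= threshold).astype(float)
```

Unlike RigL, the mask is computed once at initialization and then held fixed for the rest of training.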