google-research / rigl
End-to-end training of sparse deep neural networks with little-to-no performance loss.
☆315 · Updated last year
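RigL keeps a fixed parameter budget throughout training by periodically dropping the lowest-magnitude active weights and regrowing the same number of connections where the dense gradient is largest. Below is a minimal, hypothetical PyTorch sketch of one such drop-and-grow mask update; the function name, signature, and default fraction are assumptions for illustration, not the repository's actual API.

```python
# Hypothetical sketch of a RigL-style drop-and-grow mask update (illustration
# only, not this repository's API): drop the active connections with the
# smallest magnitude and regrow the same number of inactive connections where
# the dense gradient is largest.
import torch

def rigl_mask_update(weight, grad, mask, update_fraction=0.3):
    """Return a new binary mask with the same number of active connections."""
    n_active = int(mask.sum().item())
    n_update = int(update_fraction * n_active)
    if n_update == 0:
        return mask

    # Drop: lowest-magnitude weights among currently active connections.
    drop_scores = torch.where(mask.bool(), weight.abs(),
                              torch.full_like(weight, float("inf")))
    drop_idx = torch.topk(drop_scores.flatten(), n_update, largest=False).indices

    # Grow: largest-gradient positions among currently inactive connections.
    grow_scores = torch.where(mask.bool(), torch.full_like(grad, float("-inf")),
                              grad.abs())
    grow_idx = torch.topk(grow_scores.flatten(), n_update, largest=True).indices

    new_mask = mask.clone().flatten()
    new_mask[drop_idx] = 0.0
    new_mask[grow_idx] = 1.0
    return new_mask.view_as(mask)
```

In the paper, newly grown connections are initialized to zero and the update fraction is decayed over the course of training, so the mask settles as optimization converges.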
Related projects:
- Sparse learning library and sparse momentum resources. ☆377 · Updated 2 years ago
- PyTorch library to facilitate development and standardized evaluation of neural network pruning methods. ☆420 · Updated last year
- A research library for PyTorch-based neural network pruning, compression, and more. ☆161 · Updated last year
- Code for Neural Architecture Search without Training (ICML 2021). ☆454 · Updated 3 years ago
- A repository in preparation for open-sourcing lottery ticket hypothesis code. ☆625 · Updated 2 years ago
- Fast Block Sparse Matrices for PyTorch. ☆546 · Updated 3 years ago
- [ICLR 2020] Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks. ☆137 · Updated 3 years ago
- Naszilla is a Python library for neural architecture search (NAS). ☆303 · Updated last year
- Efficient PyTorch Hessian eigendecomposition tools! ☆355 · Updated 6 months ago
- Butterfly matrix multiplication in PyTorch. ☆160 · Updated 11 months ago
- Mode Connectivity and Fast Geometric Ensembles in PyTorch. ☆262 · Updated last year
- A drop-in replacement for CIFAR-10. ☆234 · Updated 3 years ago
- [ICLR 2021] "Neural Architecture Search on ImageNet in Four GPU Hours: A Theoretically Inspired Perspective" by Wuyang Chen, Xinyu Gong, … ☆164 · Updated 2 years ago
- ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning. ☆263 · Updated last year
- SNIP: Single-shot Network Pruning Based on Connection Sensitivity. ☆108 · Updated 5 years ago
- A re-implementation of Fixed-update Initialization. ☆150 · Updated 5 years ago
- Collection of the latest deep learning optimizers (for PyTorch), suitable for CNN and NLP models. ☆212 · Updated 3 years ago
- Block-sparse primitives for PyTorch. ☆147 · Updated 3 years ago
- Pruning neural networks with the Taylor criterion in PyTorch. ☆310 · Updated 4 years ago
- PyTorch layer-by-layer model profiler. ☆608 · Updated 3 years ago
- A large-scale study of knowledge distillation. ☆217 · Updated 4 years ago
- Soft Threshold Weight Reparameterization for Learnable Sparsity. ☆88 · Updated last year
- Estimate/count FLOPs for a given neural network using PyTorch. ☆303 · Updated 2 years ago
- Results for the paper "Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers". ☆176 · Updated 3 years ago
- PyTorch implementation of the paper "SNIP: Single-shot Network Pruning based on Connection Sensitivity" by Lee et al. ☆101 · Updated 5 years ago