rahulvigneswaran / Lottery-Ticket-Hypothesis-in-Pytorch
This repository contains a PyTorch implementation of the paper "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks" by Jonathan Frankle and Michael Carbin that can be easily adapted to any model/dataset.
☆331 · Updated last year
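The core procedure the paper describes, and which implementations like this one automate, is iterative magnitude pruning with weight rewinding: train, zero out the smallest-magnitude surviving weights, reset the survivors to their original initialization, and repeat. A minimal sketch of one such round is below; the helper names (`prune_by_magnitude`, `lottery_ticket_round`) are illustrative rather than this repo's actual API, and NumPy arrays stand in for real PyTorch parameter tensors:

```python
import numpy as np

def prune_by_magnitude(weights, mask, rate):
    """Zero out the fraction `rate` of currently-unmasked weights
    with the smallest magnitudes; returns the updated binary mask."""
    alive = weights[mask.astype(bool)]
    k = int(len(alive) * rate)
    if k == 0:
        return mask
    # Threshold at the k-th smallest surviving magnitude.
    threshold = np.sort(np.abs(alive))[k - 1]
    return mask * (np.abs(weights) > threshold)

def lottery_ticket_round(initial_weights, trained_weights, mask, rate=0.2):
    """One iterative-pruning round: prune the trained weights by
    magnitude, then rewind the survivors to their initialization."""
    mask = prune_by_magnitude(trained_weights, mask, rate)
    return initial_weights * mask, mask

# Toy example: 4 weights, prune 25% (the smallest-magnitude one).
w_init = np.array([0.5, -0.1, 0.9, 0.05])
w_trained = np.array([0.6, -0.2, 1.1, 0.01])
rewound, mask = lottery_ticket_round(w_init, w_trained,
                                     np.ones(4), rate=0.25)
```

Running this zeroes the last weight (smallest trained magnitude) and restores the other three to their initial values; repeating the round at 20% per iteration is how the paper reaches high sparsity gradually rather than in one shot.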
Alternatives and similar repositories for Lottery-Ticket-Hypothesis-in-Pytorch
Users interested in Lottery-Ticket-Hypothesis-in-Pytorch are comparing it to the libraries listed below.
- ☆226 · Updated 10 months ago
- A repository in preparation for open-sourcing lottery ticket hypothesis code. ☆633 · Updated 2 years ago
- ☆191 · Updated 4 years ago
- ☆144 · Updated 2 years ago
- PyTorch library to facilitate development and standardized evaluation of neural network pruning methods. ☆430 · Updated last year
- PyHessian is a PyTorch library for second-order-based analysis and training of neural networks. ☆740 · Updated last year
- Mode Connectivity and Fast Geometric Ensembles in PyTorch. ☆271 · Updated 2 years ago
- pytorch-tiny-imagenet ☆176 · Updated last year
- PyTorch implementation of the paper "SNIP: Single-shot Network Pruning based on Connection Sensitivity" by Lee et al. ☆108 · Updated 6 years ago
- SNIP: Single-Shot Network Pruning Based on Connection Sensitivity. ☆114 · Updated 5 years ago
- Pruning Neural Networks with Taylor criterion in PyTorch. ☆318 · Updated 5 years ago
- Sparse learning library and sparse momentum resources. ☆381 · Updated 2 years ago
- Efficient PyTorch Hessian eigendecomposition tools! ☆374 · Updated last year
- End-to-end training of sparse deep neural networks with little-to-no performance loss. ☆322 · Updated 2 years ago
- Official PyTorch implementation of "Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion" (CVPR 2020). ☆504 · Updated 2 years ago
- This repository contains code to replicate the experiments given in the NeurIPS 2019 paper "One ticket to win them all: generalizing lottery …" ☆51 · Updated 10 months ago
- Approximating neural network loss landscapes in low-dimensional parameter subspaces for PyTorch. ☆332 · Updated last year
- A reimplementation of "The Lottery Ticket Hypothesis" (Frankle and Carbin) on MNIST. ☆720 · Updated 4 years ago
- Learning Sparse Neural Networks through L0 regularization. ☆240 · Updated 4 years ago
- Code for "Picking Winning Tickets Before Training by Preserving Gradient Flow": https://openreview.net/pdf?id=SkgsACVKPH ☆104 · Updated 5 years ago
- Soft Threshold Weight Reparameterization for Learnable Sparsity. ☆90 · Updated 2 years ago
- A drop-in replacement for CIFAR-10. ☆241 · Updated 4 years ago
- Pretrained models on CIFAR10/100 in PyTorch. ☆355 · Updated 2 weeks ago
- Rethinking the Value of Network Pruning (PyTorch, ICLR 2019). ☆1,514 · Updated 4 years ago
- A large-scale study of Knowledge Distillation. ☆220 · Updated 5 years ago
- Summary and code for Deep Neural Network Quantization. ☆548 · Updated 7 months ago
- Using Teacher Assistants to Improve Knowledge Distillation: https://arxiv.org/pdf/1902.03393.pdf ☆258 · Updated 5 years ago
- PyTorch implementation of weight pruning. ☆185 · Updated 7 years ago
- Knowledge Distillation: "Revisiting Knowledge Distillation via Label Smoothing Regularization" (CVPR 2020 Oral). ☆585 · Updated 2 years ago
- ☆157 · Updated 2 years ago