facebookresearch / open_lth
A repository in preparation for open-sourcing lottery ticket hypothesis code.
☆632 · Updated 2 years ago
Alternatives and similar repositories for open_lth
Users interested in open_lth are comparing it to the libraries listed below.
- This repository contains a PyTorch implementation of the paper "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks"… ☆331 · Updated last year
- ☆144 · Updated 2 years ago
- End-to-end training of sparse deep neural networks with little-to-no performance loss. ☆321 · Updated 2 years ago
- ☆226 · Updated 9 months ago
- PyTorch library to facilitate development and standardized evaluation of neural network pruning methods. ☆429 · Updated last year
- ☆191 · Updated 4 years ago
- Sparse learning library and sparse momentum resources. ☆380 · Updated 2 years ago
- A reimplementation of "The Lottery Ticket Hypothesis" (Frankle and Carbin) on MNIST. ☆720 · Updated 4 years ago
- Efficient PyTorch Hessian eigendecomposition tools! ☆373 · Updated last year
- BackPACK - a backpropagation package built on top of PyTorch that efficiently computes quantities other than the gradient. ☆581 · Updated 4 months ago
- PyHessian is a PyTorch library for second-order-based analysis and training of neural networks. ☆735 · Updated last year
- Fast Block Sparse Matrices for PyTorch ☆545 · Updated 4 years ago
- [ICLR 2020] Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks ☆138 · Updated 4 years ago
- PyTorch layer-by-layer model profiler ☆607 · Updated 3 years ago
- ☆157 · Updated 2 years ago
- Mode Connectivity and Fast Geometric Ensembles in PyTorch ☆270 · Updated 2 years ago
- Cockpit: A Practical Debugging Tool for Training Deep Neural Networks ☆477 · Updated 2 years ago
- A drop-in replacement for CIFAR-10. ☆241 · Updated 4 years ago
- Code for Neural Architecture Search without Training (ICML 2021) ☆465 · Updated 3 years ago
- Implementation of the Lookahead optimizer. ☆241 · Updated 3 years ago
- ☆470 · Updated 2 weeks ago
- A LARS implementation in PyTorch ☆345 · Updated 5 years ago
- A machine learning benchmark of in-the-wild distribution shifts, with data loaders, evaluators, and default models. ☆564 · Updated last year
- Implementation of Estimating Training Data Influence by Tracing Gradient Descent (NeurIPS 2020) ☆231 · Updated 3 years ago
- Official PyTorch implementation of Dreaming to Distill: Data-free Knowledge Transfer via DeepInversion (CVPR 2020) ☆503 · Updated 2 years ago
- Reduce end-to-end training time from days to hours (or hours to minutes), and energy requirements/costs by an order of magnitude using co… ☆330 · Updated last year
- This repository contains the results for the paper "Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers". ☆180 · Updated 3 years ago
- Understanding Training Dynamics of Deep ReLU Networks ☆291 · Updated 3 months ago
- Pruning neural networks with the Taylor criterion in PyTorch ☆318 · Updated 5 years ago
- Gradient-based hyperparameter tuning library in PyTorch ☆289 · Updated 4 years ago