facebookresearch/open_lth
A repository in preparation for open-sourcing lottery ticket hypothesis code.
☆630 · Updated 2 years ago
Alternatives and similar repositories for open_lth:
Users interested in open_lth are comparing it to the libraries listed below:
- End-to-end training of sparse deep neural networks with little-to-no performance loss. ☆321 · Updated 2 years ago
- Sparse learning library and sparse momentum resources. ☆380 · Updated 2 years ago
- A reimplementation of "The Lottery Ticket Hypothesis" (Frankle and Carbin) on MNIST. ☆719 · Updated 4 years ago
- This repository contains a PyTorch implementation of the paper "The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks"… ☆330 · Updated last year
- ☆225 · Updated 8 months ago
- ☆190 · Updated 4 years ago
- ☆144 · Updated 2 years ago
- Mode Connectivity and Fast Geometric Ensembles in PyTorch ☆270 · Updated 2 years ago
- BackPACK - a backpropagation package built on top of PyTorch which efficiently computes quantities other than the gradient. ☆578 · Updated 3 months ago
- [ICLR 2020] Drawing Early-Bird Tickets: Toward More Efficient Training of Deep Networks ☆137 · Updated 4 years ago
- PyHessian is a PyTorch library for second-order analysis and training of neural networks. ☆731 · Updated last year
- Efficient PyTorch Hessian eigendecomposition tools! ☆370 · Updated last year
- PyTorch library to facilitate development and standardized evaluation of neural network pruning methods. ☆429 · Updated last year
- Fast Block Sparse Matrices for PyTorch ☆545 · Updated 4 years ago
- Code for Neural Architecture Search without Training (ICML 2021) ☆465 · Updated 3 years ago
- ☆468 · Updated 8 months ago
- This repository contains the results for the paper "Descending through a Crowded Valley - Benchmarking Deep Learning Optimizers". ☆180 · Updated 3 years ago
- Gradient-based hyperparameter tuning library in PyTorch ☆289 · Updated 4 years ago
- ☆157 · Updated 2 years ago
- higher is a PyTorch library allowing users to obtain higher-order gradients over losses spanning training loops rather than individual tr… ☆1,613 · Updated 3 years ago
- Reduce end-to-end training time from days to hours (or hours to minutes), and energy requirements/costs by an order of magnitude using co… ☆330 · Updated last year
- A drop-in replacement for CIFAR-10. ☆240 · Updated 4 years ago
- Implementation of Estimating Training Data Influence by Tracing Gradient Descent (NeurIPS 2020) ☆230 · Updated 3 years ago
- Project site for "Your Classifier is Secretly an Energy-Based Model and You Should Treat it Like One" ☆422 · Updated 2 years ago
- Naszilla is a Python library for neural architecture search (NAS). ☆309 · Updated 2 years ago
- A Harder ImageNet Test Set (CVPR 2021) ☆605 · Updated last year
- ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning ☆274 · Updated 2 years ago
- Understanding Training Dynamics of Deep ReLU Networks ☆289 · Updated 2 months ago
- PyTorch layer-by-layer model profiler ☆606 · Updated 3 years ago
- A research library for PyTorch-based neural network pruning, compression, and more. ☆160 · Updated 2 years ago
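
Many of the repositories above revolve around the lottery ticket procedure of Frankle and Carbin: train the network, prune the smallest-magnitude weights, rewind the survivors to their original initialization, and repeat. A minimal NumPy sketch of that loop is shown below; this is an illustration of the general technique, not open_lth's actual API, and the `train_fn` hook and both function names are hypothetical:

```python
import numpy as np

def magnitude_prune(weights, mask, fraction):
    """Zero out the lowest-magnitude `fraction` of still-unpruned weights."""
    alive = np.flatnonzero(mask)                # indices of unpruned weights
    k = int(len(alive) * fraction)              # how many to prune this round
    order = np.argsort(np.abs(weights[alive]))  # smallest magnitudes first
    new_mask = mask.copy()
    new_mask[alive[order[:k]]] = 0.0
    return new_mask

def find_ticket(init_weights, train_fn, rounds=3, fraction=0.2):
    """Iterative magnitude pruning with rewinding to the original init.

    `train_fn(weights, mask)` is a user-supplied hook that trains the
    masked network and returns the trained weights.
    """
    mask = np.ones_like(init_weights)
    for _ in range(rounds):
        trained = train_fn(init_weights * mask, mask)   # train masked network
        mask = magnitude_prune(trained, mask, fraction) # prune by magnitude
    return init_weights * mask, mask                    # the "winning ticket"
```

With `fraction=0.2`, each round removes 20% of the surviving weights, so after three rounds roughly half the network remains; the returned ticket is the original initialization restricted to the surviving mask.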