HazyResearch / butterfly
Butterfly matrix multiplication in PyTorch
☆168 · Updated last year
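A butterfly matrix of size n = 2^k factors into k sparse "butterfly factors", each mixing element pairs at a fixed stride with a 2×2 block, so a matrix-vector product costs O(n log n) instead of O(n²). A minimal sketch of one such factor is below; the function name and the twiddle-tensor layout are illustrative choices, not the library's actual API.

```python
import torch

def butterfly_step(x, twiddle, stride):
    """Apply one butterfly factor to the last dimension of x.

    Each element i is mixed with element i + stride inside every
    contiguous block of size 2 * stride, using a 2x2 block.
    x: (batch, n), twiddle: (n // 2, 2, 2); n and stride powers of two.
    """
    batch, n = x.shape
    pairs = n // (2 * stride)
    y = x.view(batch, pairs, 2, stride)
    a, b = y[:, :, 0, :], y[:, :, 1, :]        # (batch, pairs, stride)
    t = twiddle.view(pairs, stride, 2, 2)      # one 2x2 block per mixed pair
    out_a = t[:, :, 0, 0] * a + t[:, :, 0, 1] * b
    out_b = t[:, :, 1, 0] * a + t[:, :, 1, 1] * b
    return torch.stack([out_a, out_b], dim=2).reshape(batch, n)

# A full butterfly transform chains log2(n) factors over doubling strides,
# the same access pattern as an FFT.
n, batch = 8, 3
x = torch.randn(batch, n)
for stride in (1, 2, 4):
    x = butterfly_step(x, torch.randn(n // 2, 2, 2), stride)
```

With identity 2×2 blocks each factor is a no-op, which is a quick sanity check that the indexing is consistent.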
Alternatives and similar repositories for butterfly
Users interested in butterfly are comparing it to the libraries listed below.
- Block-sparse primitives for PyTorch ☆155 · Updated 4 years ago
- Distributed K-FAC preconditioner for PyTorch ☆87 · Updated last week
- Structured matrices for compressing neural networks ☆66 · Updated last year
- ☆205 · Updated 2 years ago
- ☆228 · Updated 3 months ago
- Low Precision Arithmetic Simulation in PyTorch ☆278 · Updated last year
- Customized matrix multiplication kernels ☆54 · Updated 3 years ago
- ☆36 · Updated 5 months ago
- JMP is a Mixed Precision library for JAX. ☆199 · Updated 4 months ago
- A library of GPU kernels for sparse matrix operations. ☆264 · Updated 4 years ago
- ☆167 · Updated 11 months ago
- Code for the paper "SWALP: Stochastic Weight Averaging for Low-Precision Training". ☆62 · Updated 6 years ago
- Training neural networks in TensorFlow 2.0 with 5x less memory ☆131 · Updated 3 years ago
- CUDA kernels for generalized matrix multiplication in PyTorch ☆82 · Updated 3 years ago
- ☆10 · Updated 3 years ago
- Sparsity support for PyTorch ☆35 · Updated 2 months ago
- CUDA templates for tile-sparse matrix multiplication based on CUTLASS. ☆51 · Updated 7 years ago
- Implementations and checkpoints for ResNet, Wide ResNet, ResNeXt, ResNet-D, and ResNeSt in JAX (Flax). ☆110 · Updated 3 years ago
- PyTorch implementation of HashedNets ☆36 · Updated 2 years ago
- ASDL: Automatic Second-order Differentiation Library for PyTorch ☆187 · Updated 6 months ago
- Code for the paper "What if Neural Networks had SVDs?", presented as a spotlight paper at NeurIPS 2020. ☆75 · Updated 10 months ago
- ADAHESSIAN: An Adaptive Second Order Optimizer for Machine Learning ☆276 · Updated 2 years ago
- PyTorch AutoNEB implementation to identify minimum energy paths, e.g. in neural network loss landscapes ☆55 · Updated 2 years ago
- End-to-end training of sparse deep neural networks with little-to-no performance loss. ☆322 · Updated 2 years ago
- Implementation of the paper "Latent Weights Do Not Exist: Rethinking Binarized Neural Network Optimization" ☆74 · Updated 5 years ago
- Code accompanying the NeurIPS 2020 paper WoodFisher (Singh & Alistarh, 2020) ☆52 · Updated 4 years ago
- DeepHoyer: Learning Sparser Neural Network with Differentiable Scale-Invariant Sparsity Measures ☆33 · Updated 4 years ago
- A research library for PyTorch-based neural network pruning, compression, and more. ☆162 · Updated 2 years ago
- A library for unit scaling in PyTorch ☆125 · Updated 6 months ago
- Experiment of using Tangent to autodiff triton ☆79 · Updated last year