skolai/fewbit
Compression scheme for activation gradients in the backward pass
☆44 · Updated 2 years ago
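The description above says fewbit compresses activation gradients in the backward pass. A minimal sketch of the general idea, assuming a uniform few-bit quantizer over a fixed range (the helper names `quantize`/`dequantize` are illustrative, not fewbit's actual API):

```python
import numpy as np

def quantize(x, bits=2, lo=-1.0, hi=1.0):
    """Map values in [lo, hi] to 2**bits bucket indices plus a codebook."""
    levels = 2 ** bits
    edges = np.linspace(lo, hi, levels + 1)
    # Interior edges decide the bucket; clip handles values at the range ends.
    codes = np.clip(np.digitize(x, edges[1:-1]), 0, levels - 1)
    codebook = (edges[:-1] + edges[1:]) / 2  # bucket midpoints
    return codes.astype(np.uint8), codebook

def dequantize(codes, codebook):
    """Reconstruct approximate values from stored bucket indices."""
    return codebook[codes]

# Values that would normally be saved in full precision for backward:
x = np.array([-0.9, -0.1, 0.2, 0.8])
codes, book = quantize(x, bits=2)   # only 2 bits per element stored
x_hat = dequantize(codes, book)     # approximation used in backward
```

Here `x_hat` deviates from `x` by at most half a bucket width (0.25 for 2 bits over [-1, 1]); the memory saved during the forward pass shrinks from 32 bits to `bits` bits per element, at the cost of this quantization error in the gradient.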
Alternatives and similar repositories for fewbit
Users interested in fewbit are comparing it to the libraries listed below:
- Learning to Initialize Neural Networks for Stable and Efficient Training (☆138, updated 3 years ago)
- PyTorch implementation of the L2L execution algorithm (☆109, updated 2 years ago)
- Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021) (☆117, updated 3 years ago)
- A block-oriented training approach for inference-time optimization (☆34, updated last year)
- Repo for DenseAttention and DANet, a fast and conceptually simple modification of standard attention and the Transformer (☆19, updated last week)
- A library for unit scaling in PyTorch (☆133, updated 5 months ago)
- ☆222, updated 2 years ago
- Customized matrix multiplication kernels (☆57, updated 3 years ago)
- Latest Weight Averaging (NeurIPS HITY 2022) (☆32, updated 2 years ago)
- ☆75, updated 3 years ago
- Experiment of using Tangent to autodiff Triton (☆81, updated last year)
- PyTorch implementation of HashedNets (☆38, updated 2 years ago)
- sigma-MoE layer (☆20, updated 2 years ago)
- ☆29, updated 3 years ago
- Code for the note "NF4 Isn't Information Theoretically Optimal (and that's Good)" (☆21, updated 2 years ago)
- Block-sparse primitives for PyTorch (☆160, updated 4 years ago)
- Implementation of fused cosine-similarity attention in the same style as Flash Attention (☆219, updated 2 years ago)
- ☆59, updated 5 years ago
- Identify a binary-weight (or binary weight-and-activation) subnetwork within a randomly initialized network by only pruning and binarizing … (☆51, updated 3 years ago)
- ☆160, updated 2 years ago
- MUSCO: MUlti-Stage COmpression of neural networks (☆72, updated 4 years ago)
- ☆20, updated last year
- ☆71, updated last year
- Hacks for PyTorch (☆19, updated 2 years ago)
- The official implementation of the ChordMixer architecture (☆61, updated 2 years ago)
- ☆52, updated 2 weeks ago
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8 (☆46, updated last year)
- ☆21, updated 9 months ago
- ☆36, updated last year
- [JMLR'20] NeurIPS 2019 MicroNet Challenge: Efficient Language Modeling, Champion (☆41, updated 4 years ago)