skolai / fewbit
Compression scheme for gradients of activations in the backward pass
☆44 · Updated 2 years ago
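A minimal sketch of the underlying idea, not fewbit's actual API: for a pointwise activation, the backward pass needs only the local derivative, which can often be stored in far fewer bits than the full fp32 input. For ReLU the derivative is exactly 0 or 1, so a one-bit mask per element carries everything the gradient needs (PyTorch bool tensors still occupy a byte per element; a real few-bit scheme would pack the bits).

```python
# Illustrative only: a ReLU whose backward pass keeps a boolean mask instead of
# the full fp32 input, shrinking the activation memory held for gradients.
# This shows the general idea of compressing activations saved for backward;
# it is NOT fewbit's actual API.
import torch


class MaskedReLU(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x):
        mask = x > 0                    # one bit of information per element
        ctx.save_for_backward(mask)     # saved instead of the fp32 activation
        return x * mask

    @staticmethod
    def backward(ctx, grad_output):
        (mask,) = ctx.saved_tensors
        return grad_output * mask       # d(relu)/dx is exactly the mask


x = torch.randn(8, requires_grad=True)
MaskedReLU.apply(x).sum().backward()
print(x.grad)                           # matches torch.relu's gradient
```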
Alternatives and similar repositories for fewbit
Users interested in fewbit are comparing it to the libraries listed below.
- Learning to Initialize Neural Networks for Stable and Efficient Training ☆139 · Updated 3 years ago
- PyTorch implementation of the L2L execution algorithm ☆109 · Updated 2 years ago
- Latest Weight Averaging (NeurIPS HITY 2022) ☆32 · Updated 2 years ago
- Experiment of using Tangent to autodiff Triton ☆80 · Updated last year
- Customized matrix multiplication kernels ☆57 · Updated 3 years ago
- ☆49 · Updated last month
- A library for unit scaling in PyTorch ☆132 · Updated 5 months ago
- ☆121 · Updated last year
- ☆221 · Updated 2 years ago
- ☆75 · Updated 3 years ago
- Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021) ☆117 · Updated 3 years ago
- Lightweight knowledge distillation pipeline ☆28 · Updated 4 years ago
- sigma-MoE layer ☆20 · Updated last year
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆46 · Updated last year
- ☆29 · Updated 3 years ago
- The repo for DenseAttention and DANet, a fast and conceptually simple modification of standard attention and the Transformer ☆19 · Updated this week
- Hacks for PyTorch ☆19 · Updated 2 years ago
- ☆59 · Updated 5 years ago
- Official implementation of the paper "You Do Not Fully Utilize Transformer's Representation Capacity" ☆31 · Updated 6 months ago
- ☆70 · Updated last year
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) ☆81 · Updated 2 years ago
- ☆10 · Updated 3 years ago
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight … ☆237 · Updated 2 years ago
- Short article showing how to load PyTorch models with linear memory consumption ☆34 · Updated 3 years ago
- ☆20 · Updated 8 months ago
- Easy-to-use AdaHessian optimizer (PyTorch) ☆79 · Updated 5 years ago
- Faster PyTorch bitsandbytes 4-bit fp4 nn.Linear ops ☆30 · Updated last year
- ☆52 · Updated last year
- Fast, Modern, and Low Precision PyTorch Optimizers ☆116 · Updated 3 months ago
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton ☆73 · Updated last year