facebookresearch / bitsandbytes
Library for 8-bit optimizers and quantization routines.
☆715 · Updated 2 years ago
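The 8-bit routines in bitsandbytes revolve around blockwise absmax quantization: a tensor is split into small blocks, and each block stores one fp32 scale (its maximum absolute value) plus int8 codes. A minimal NumPy sketch of that idea (illustrative only, not the library's actual kernels; the function names here are made up for the example):

```python
import numpy as np

def quantize_absmax(x, block=64):
    """Blockwise absmax quantization: int8 codes plus one fp32 scale per block."""
    x = x.reshape(-1, block)                      # length must divide evenly into blocks
    scale = np.abs(x).max(axis=1, keepdims=True)  # one absmax scale per block
    scale[scale == 0] = 1.0                       # avoid division by zero for all-zero blocks
    q = np.round(x / scale * 127).astype(np.int8) # map [-absmax, absmax] to [-127, 127]
    return q, scale

def dequantize_absmax(q, scale):
    """Invert the mapping: int8 codes back to approximate fp32 values."""
    return q.astype(np.float32) / 127 * scale
```

Because each block carries its own scale, a single outlier only degrades precision within its own block, which is the main reason blockwise schemes work better than one global scale per tensor.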
Alternatives and similar repositories for bitsandbytes
Users interested in bitsandbytes compare it to the libraries listed below.
- Fast Block Sparse Matrices for PyTorch ☆547 · Updated 4 years ago
- Prune a model while finetuning or training. ☆403 · Updated 3 years ago
- Flexible components pairing 🤗 Transformers with PyTorch Lightning ☆609 · Updated 2 years ago
- FastFormers - highly efficient transformer models for NLU ☆705 · Updated 3 months ago
- Memory Efficient Attention (O(sqrt(n))) for Jax and PyTorch ☆184 · Updated 2 years ago
- A library to inspect and extract intermediate layers of PyTorch models. ☆473 · Updated 3 years ago
- Implementation of a Transformer, but completely in Triton ☆268 · Updated 3 years ago
- [Prototype] Tools for the concurrent manipulation of variably sized Tensors. ☆251 · Updated 2 years ago
- Slicing a PyTorch Tensor Into Parallel Shards ☆299 · Updated 2 weeks ago
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ☆1,572 · Updated last year
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,053 · Updated last year
- Accelerate PyTorch models with ONNX Runtime ☆362 · Updated 4 months ago
- A GPU performance profiling tool for PyTorch models ☆503 · Updated 3 years ago
- Implementation of a memory-efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory" ☆379 · Updated last year
- Long Range Arena for Benchmarking Efficient Transformers ☆757 · Updated last year
- Profiling and inspecting memory in PyTorch ☆1,061 · Updated 10 months ago
- Implementation of https://arxiv.org/abs/1904.00962 ☆377 · Updated 4 years ago
- Official PyTorch Implementation of Long-Short Transformer (NeurIPS 2021). ☆225 · Updated 3 years ago
- An efficient implementation of the popular sequence models for text generation, summarization, and translation tasks. https://arxiv.org/p… ☆433 · Updated 2 years ago
- Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch ☆866 · Updated last year
- Named tensors with first-class dimensions for PyTorch ☆331 · Updated 2 years ago
- Recipes are a standard, well-supported set of blueprints for machine learning engineers to rapidly train models using the latest research… ☆321 · Updated this week
- ☆355 · Updated last year
- Efficient, check-pointed data loading for deep learning with massive data sets. ☆208 · Updated 2 years ago
- Cockpit: A Practical Debugging Tool for Training Deep Neural Networks ☆480 · Updated 2 years ago
- Implementation of the Adan (ADAptive Nesterov momentum algorithm) optimizer in PyTorch ☆252 · Updated 2 years ago
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight … ☆235 · Updated 2 years ago
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723) ☆486 · Updated 4 years ago
- Code for the ALiBi method for transformer language models (ICLR 2022) ☆535 · Updated last year
- Understanding the Difficulty of Training Transformers ☆329 · Updated 3 years ago
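Two of the entries above implement the memory-efficient attention of "Self-attention Does Not Need O(n²) Memory". The core trick is to process key/value chunks with a running (streaming) softmax, so the full n×n score matrix is never materialized. A NumPy sketch of that idea (illustrative only, not the code of either repository):

```python
import numpy as np

def chunked_attention(q, k, v, chunk=64):
    """Softmax attention over key/value chunks with a running max,
    keeping peak memory at O(n_q * chunk) instead of O(n_q * n_k)."""
    n_k, d = k.shape
    m = np.full(q.shape[0], -np.inf)            # running max score per query
    s = np.zeros(q.shape[0])                    # running softmax denominator
    o = np.zeros((q.shape[0], v.shape[1]))      # running weighted value sum
    for i in range(0, n_k, chunk):
        scores = q @ k[i:i + chunk].T / np.sqrt(d)     # (n_q, chunk) only
        m_new = np.maximum(m, scores.max(axis=1))
        scale = np.exp(m - m_new)                      # rescale old accumulators
        p = np.exp(scores - m_new[:, None])            # stable chunk weights
        s = s * scale + p.sum(axis=1)
        o = o * scale[:, None] + p @ v[i:i + chunk]
        m = m_new
    return o / s[:, None]
```

The rescaling by `exp(m - m_new)` keeps the accumulators consistent as the running max grows, so the result matches a naive full-matrix softmax up to floating-point error.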