facebookresearch / bitsandbytes
Library for 8-bit optimizers and quantization routines.
☆717 · Updated 2 years ago
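As context for the listing, here is a minimal usage sketch of the kind of 8-bit optimizer the library provides, assuming the `bitsandbytes` package is installed and a CUDA GPU is available; it follows the documented drop-in pattern of `bnb.optim.Adam8bit`, with the toy model chosen only for illustration:

```python
import torch
import torch.nn as nn
import bitsandbytes as bnb

# Toy model; 8-bit optimizers in bitsandbytes expect CUDA tensors.
model = nn.Linear(1024, 1024).cuda()

# Drop-in replacement for torch.optim.Adam that stores optimizer state in 8 bits,
# reducing optimizer memory compared to the 32-bit default.
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-3)

x = torch.randn(16, 1024, device="cuda")
loss = model(x).pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```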
Alternatives and similar repositories for bitsandbytes:
Users interested in bitsandbytes are comparing it to the libraries listed below.
- Fast Block Sparse Matrices for PyTorch ☆546 · Updated 4 years ago
- Slicing a PyTorch Tensor Into Parallel Shards ☆298 · Updated 3 years ago
- Prune a model while finetuning or training. ☆397 · Updated 2 years ago
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ☆1,550 · Updated 11 months ago
- Flexible components pairing 🤗 Transformers with PyTorch Lightning ☆612 · Updated 2 years ago
- A GPU performance profiling tool for PyTorch models ☆500 · Updated 3 years ago
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,019 · Updated 9 months ago
- Code for the ALiBi method for transformer language models (ICLR 2022) ☆512 · Updated last year
- Helps you write algorithms in PyTorch that adapt to the available (CUDA) memory ☆431 · Updated 5 months ago
- Accelerate PyTorch models with ONNX Runtime ☆357 · Updated 4 months ago
- FastFormers - highly efficient transformer models for NLU ☆703 · Updated last year
- A library to inspect and extract intermediate layers of PyTorch models. ☆470 · Updated 2 years ago
- Memory Efficient Attention (O(sqrt(n))) for JAX and PyTorch ☆180 · Updated 2 years ago
- Recipes are a standard, well supported set of blueprints for machine learning engineers to rapidly train models using the latest research… ☆302 · Updated this week
- [Prototype] Tools for the concurrent manipulation of variably sized Tensors. ☆252 · Updated 2 years ago
- A library that contains a rich collection of performant PyTorch model metrics, a simple interface to create new metrics, a toolkit to fac… ☆225 · Updated last week
- Implementation of a Transformer, but completely in Triton ☆253 · Updated 2 years ago
- Accelerate training by storing parameters in one contiguous chunk of memory. ☆292 · Updated 4 years ago
- Maximal update parametrization (µP) ☆1,437 · Updated 6 months ago
- An efficient implementation of the popular sequence models for text generation, summarization, and translation tasks. https://arxiv.org/p… ☆429 · Updated 2 years ago
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight … ☆235 · Updated last year
- functorch is JAX-like composable function transforms for PyTorch (see the sketch after this list). ☆1,405 · Updated this week
- Sequence modeling with Mega. ☆297 · Updated 2 years ago
- Long Range Arena for Benchmarking Efficient Transformers ☆740 · Updated last year
- Understanding the Difficulty of Training Transformers ☆328 · Updated 2 years ago
- Experimental ground for optimizing memory of PyTorch models ☆361 · Updated 6 years ago
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,673 · Updated 3 months ago
- Implementation of the Adan (ADAptive Nesterov momentum algorithm) Optimizer in PyTorch ☆251 · Updated 2 years ago
- Repository containing code for "How to Train BERT with an Academic Budget" paper ☆310 · Updated last year
- Pipeline Parallelism for PyTorch ☆739 · Updated 5 months ago
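The "composable function transforms" mentioned in the functorch entry above can be illustrated with a minimal sketch, assuming the `functorch` package is installed (recent PyTorch releases expose the same transforms under `torch.func`); the function `f` below is a made-up example:

```python
import torch
from functorch import grad, vmap

def f(x):
    # Scalar-valued toy function of a vector input.
    return torch.sin(x).sum()

x = torch.randn(3)
print(grad(f)(x))  # elementwise gradient, i.e. cos(x)

# Transforms compose, JAX-style: per-row gradients over a batch without a Python loop.
batched = torch.randn(5, 3)
print(vmap(grad(f))(batched))
```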