facebookresearch / bitsandbytes
Library for 8-bit optimizers and quantization routines.
☆716 · Updated 2 years ago
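For context on what the library offers, a minimal usage sketch: swapping a standard 32-bit PyTorch optimizer for bitsandbytes' 8-bit Adam. This is an illustration, not code from the repository; the tiny model and dummy batch are placeholders, and `bnb.optim.Adam8bit` is the drop-in 8-bit optimizer class the library exposes (a CUDA build is assumed).

```python
# Minimal sketch (not from the repo): replacing torch.optim.Adam with
# bitsandbytes' 8-bit Adam. Assumes a CUDA build of bitsandbytes is
# installed; the model and dummy batch below are placeholders.
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(1024, 1024).cuda()

# Drop-in replacement for torch.optim.Adam; optimizer state is stored
# in 8 bits, cutting optimizer memory roughly 4x versus 32-bit state.
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-3)

x = torch.randn(16, 1024, device="cuda")
loss = model(x).pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```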
Alternatives and similar repositories for bitsandbytes:
Users interested in bitsandbytes are comparing it to the libraries listed below
- Prune a model while finetuning or training. ☆402 · Updated 2 years ago
- Fast Block Sparse Matrices for PyTorch ☆545 · Updated 4 years ago
- Flexible components pairing 🤗 Transformers with PyTorch Lightning ☆609 · Updated 2 years ago
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,040 · Updated last year
- FastFormers - highly efficient transformer models for NLU ☆706 · Updated last month
- Implementation of a Transformer, but completely in Triton ☆263 · Updated 3 years ago
- Helps you write algorithms in PyTorch that adapt to the available (CUDA) memory ☆435 · Updated 7 months ago
- Recipes are a standard, well-supported set of blueprints for machine learning engineers to rapidly train models using the latest research… ☆312 · Updated last week
- Slicing a PyTorch Tensor Into Parallel Shards ☆298 · Updated 3 years ago
- Long Range Arena for Benchmarking Efficient Transformers ☆751 · Updated last year
- [Prototype] Tools for the concurrent manipulation of variably sized Tensors. ☆251 · Updated 2 years ago
- A GPU performance profiling tool for PyTorch models ☆506 · Updated 3 years ago
- Named tensors with first-class dimensions for PyTorch ☆320 · Updated last year
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,684 · Updated 6 months ago
- Code for the ALiBi method for transformer language models (ICLR 2022) ☆522 · Updated last year
- Accelerate PyTorch models with ONNX Runtime ☆359 · Updated 2 months ago
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ☆1,565 · Updated last year
- ☆349 · Updated last year
- A library to inspect and extract intermediate layers of PyTorch models. ☆472 · Updated 2 years ago
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight… ☆235 · Updated last year
- maximal update parametrization (µP) ☆1,498 · Updated 9 months ago
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ☆992 · Updated 8 months ago
- A library that contains a rich collection of performant PyTorch model metrics, a simple interface to create new metrics, a toolkit to fac… ☆230 · Updated 3 months ago
- Implementation of the LAMB optimizer (https://arxiv.org/abs/1904.00962) ☆374 · Updated 4 years ago
- Implementation of Flash Attention in JAX ☆206 · Updated last year
- Implementation of a memory-efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory" (a rough sketch of the idea follows this list) ☆377 · Updated last year
- Run Effective Large Batch Contrastive Learning Beyond GPU/TPU Memory Constraint ☆386 · Updated last year
- Profiling and inspecting memory in PyTorch ☆1,057 · Updated 8 months ago
- functorch: JAX-like composable function transforms for PyTorch ☆1,422 · Updated this week
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment ☆785 · Updated 2 years ago
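As a rough illustration of the idea behind the memory-efficient attention entry above (not that repository's actual code): attention scores can be computed over blocks of keys and values while maintaining running softmax statistics, so the full n×n score matrix is never materialized. A minimal plain-PyTorch sketch; the function name, chunk size, and test at the end are all chosen here for illustration.

```python
# Sketch of chunked (memory-efficient) attention: process keys/values
# in blocks, keeping a running softmax max and normalizer so the full
# n x n attention matrix never exists in memory. Illustrative only.
import torch

def chunked_attention(q, k, v, chunk=256):
    # q, k, v: (n, d) single-head tensors
    scale = q.shape[-1] ** -0.5
    out = torch.zeros_like(q)
    m = torch.full((q.shape[0], 1), float("-inf"))  # running row max
    denom = torch.zeros(q.shape[0], 1)              # running normalizer
    for i in range(0, k.shape[0], chunk):
        s = (q @ k[i:i + chunk].T) * scale          # (n_q, chunk) scores
        m_new = torch.maximum(m, s.max(dim=-1, keepdim=True).values)
        alpha = torch.exp(m - m_new)                # rescale old statistics
        p = torch.exp(s - m_new)
        out = out * alpha + p @ v[i:i + chunk]
        denom = denom * alpha + p.sum(dim=-1, keepdim=True)
        m = m_new
    return out / denom

# Matches naive attention, without ever building the full score matrix.
q = k = v = torch.randn(512, 64)
ref = torch.softmax((q @ k.T) * 64 ** -0.5, dim=-1) @ v
assert torch.allclose(chunked_attention(q, k, v), ref, atol=1e-5)
```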