facebookresearch / bitsandbytes
Library for 8-bit optimizers and quantization routines.
⭐717 · Updated 2 years ago
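For orientation, a minimal sketch of the kind of drop-in usage the library targets: swapping a standard PyTorch optimizer for an 8-bit one. This assumes the `bitsandbytes` package is installed and a CUDA GPU is available; the model and hyperparameters below are placeholders, not a recommended configuration.

```python
# Minimal sketch, assuming `bitsandbytes` is installed and a CUDA GPU is available.
import torch
import bitsandbytes as bnb

model = torch.nn.Linear(1024, 1024).cuda()  # placeholder model

# 8-bit Adam keeps its optimizer state in 8 bits, roughly quartering optimizer
# memory relative to standard 32-bit Adam state.
optimizer = bnb.optim.Adam8bit(model.parameters(), lr=1e-3)

x = torch.randn(16, 1024, device="cuda")
loss = model(x).pow(2).mean()  # dummy objective, just to drive one update
loss.backward()
optimizer.step()
optimizer.zero_grad()
```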
Alternatives and similar repositories for bitsandbytes:
Users interested in bitsandbytes are comparing it to the libraries listed below:
- Fast Block Sparse Matrices for PyTorch ⭐545 · Updated 4 years ago
- Prune a model while finetuning or training. ⭐402 · Updated 2 years ago
- Flexible components pairing 🤗 Transformers with PyTorch Lightning ⭐609 · Updated 2 years ago
- Slicing a PyTorch Tensor Into Parallel Shards ⭐298 · Updated 3 years ago
- Accelerate PyTorch models with ONNX Runtime ⭐358 · Updated last month
- FastFormers - highly efficient transformer models for NLU ⭐704 · Updated last week
- A GPU performance profiling tool for PyTorch models ⭐505 · Updated 3 years ago
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ⭐1,683 · Updated 5 months ago
- Recipes are a standard, well-supported set of blueprints for machine learning engineers to rapidly train models using the latest research… ⭐311 · Updated last week
- [Prototype] Tools for the concurrent manipulation of variably sized Tensors. ⭐252 · Updated 2 years ago
- Implementation of a Transformer, but completely in Triton ⭐263 · Updated 2 years ago
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ⭐1,038 · Updated 11 months ago
- Code for the ALiBi method for transformer language models (ICLR 2022) ⭐520 · Updated last year
- An efficient implementation of the popular sequence models for text generation, summarization, and translation tasks. https://arxiv.org/p… ⭐432 · Updated 2 years ago
- Understanding the Difficulty of Training Transformers ⭐328 · Updated 2 years ago
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ⭐1,560 · Updated last year
- Named tensors with first-class dimensions for PyTorch ⭐321 · Updated last year
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment ⭐785 · Updated last year
- A library to inspect and extract intermediate layers of PyTorch models. ⭐472 · Updated 2 years ago
- Memory Efficient Attention (O(sqrt(n))) for Jax and PyTorch ⭐182 · Updated 2 years ago
- Repository containing code for the "How to Train BERT with an Academic Budget" paper ⭐312 · Updated last year
- PyTorch dataset extended with map, cache, etc. (tensorflow.data-like) ⭐329 · Updated 2 years ago
- Implementation of a memory-efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory" (see the sketch after this list) ⭐375 · Updated last year
- Efficient, check-pointed data loading for deep learning with massive data sets. ⭐205 · Updated last year
- Long Range Arena for Benchmarking Efficient Transformers ⭐748 · Updated last year
- Accelerate training by storing parameters in one contiguous chunk of memory. ⭐291 · Updated 4 years ago
- DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight… ⭐235 · Updated last year
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723) ⭐484 · Updated 3 years ago
- Helps you write algorithms in PyTorch that adapt to the available (CUDA) memory ⭐435 · Updated 7 months ago
- Cockpit: A Practical Debugging Tool for Training Deep Neural Networks ⭐474 · Updated 2 years ago
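Two of the entries above, the O(sqrt(n)) memory-efficient attention for Jax/PyTorch and the implementation of "Self-attention Does Not Need O(n²) Memory", share one core idea: never materialize the full n×n attention matrix at once. The sketch below only illustrates that chunking idea in plain PyTorch and is not either repository's actual algorithm; the paper additionally chunks keys and values with an incremental softmax to reach O(sqrt(n)) memory.

```python
# Illustrative sketch only: attention computed one query chunk at a time, so peak
# score memory is (chunk_size x n) instead of (n x n).
import torch

def chunked_attention(q, k, v, chunk_size=128):
    # q, k, v: (n, d) tensors; returns the (n, d) attention output.
    scale = q.shape[-1] ** -0.5
    outputs = []
    for start in range(0, q.shape[0], chunk_size):
        q_chunk = q[start:start + chunk_size]            # (c, d)
        scores = (q_chunk @ k.transpose(0, 1)) * scale   # (c, n), a slice of the full matrix
        weights = scores.softmax(dim=-1)
        outputs.append(weights @ v)                      # (c, d)
    return torch.cat(outputs, dim=0)

# Quick numerical check against the naive implementation.
q, k, v = (torch.randn(512, 64) for _ in range(3))
naive = torch.softmax((q @ k.T) * 64 ** -0.5, dim=-1) @ v
assert torch.allclose(chunked_attention(q, k, v), naive, atol=1e-5)
```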