facebookresearch / bitsandbytes
Library for 8-bit optimizers and quantization routines.
☆781 · Updated 3 years ago
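The core idea behind the library's quantization routines, blockwise absmax quantization to 8 bits, can be sketched as follows. This is a simplified NumPy illustration of the concept, not bitsandbytes' actual CUDA kernels, and the function names are hypothetical:

```python
import numpy as np

def quantize_blockwise(x, block=64):
    """Blockwise absmax int8 quantization: each block of `block` values is
    scaled by its own absolute maximum, so one outlier only degrades the
    precision of its own block rather than the whole tensor."""
    x = np.asarray(x, dtype=np.float32).ravel()
    pad = (-len(x)) % block                         # pad to a whole number of blocks
    xp = np.pad(x, (0, pad)).reshape(-1, block)
    absmax = np.abs(xp).max(axis=1, keepdims=True)
    absmax[absmax == 0] = 1.0                       # avoid division by zero for all-zero blocks
    q = np.round(xp / absmax * 127).astype(np.int8)
    return q, absmax, len(x)

def dequantize_blockwise(q, absmax, n):
    """Invert the mapping: rescale each int8 block by its stored absmax."""
    return (q.astype(np.float32) / 127 * absmax).ravel()[:n]
```

The per-block scale is the key design choice: with a single global scale, one large weight would force every other value into a handful of int8 levels.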
Alternatives and similar repositories for bitsandbytes
Users interested in bitsandbytes compare it to the libraries listed below.
- Prune a model while finetuning or training. ☆405 · Updated 3 years ago
- Fast Block Sparse Matrices for PyTorch ☆550 · Updated 4 years ago
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable ☆1,584 · Updated last year
- A Python-level JIT compiler designed to make unmodified PyTorch programs faster. ☆1,071 · Updated last year
- Flexible components pairing 🤗 Transformers with PyTorch Lightning ☆613 · Updated 3 years ago
- Slicing a PyTorch Tensor Into Parallel Shards ☆300 · Updated 6 months ago
- Implementation of a Transformer, but completely in Triton ☆277 · Updated 3 years ago
- FastFormers - highly efficient transformer models for NLU ☆709 · Updated 9 months ago
- A PyTorch repo for data loading and utilities to be shared by the PyTorch domain libraries. ☆1,245 · Updated this week
- A library to inspect and extract intermediate layers of PyTorch models. ☆475 · Updated 3 years ago
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment ☆792 · Updated 2 years ago
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,689 · Updated last year
- [Prototype] Tools for the concurrent manipulation of variably sized Tensors. ☆250 · Updated 3 years ago
- Accelerate PyTorch models with ONNX Runtime ☆368 · Updated 2 weeks ago
- Memory Efficient Attention (O(sqrt(n))) for JAX and PyTorch ☆184 · Updated 2 years ago
- Maximal update parametrization (µP) ☆1,644 · Updated last year
- Helps you write algorithms in PyTorch that adapt to the available (CUDA) memory ☆438 · Updated last year
- Long Range Arena for Benchmarking Efficient Transformers ☆771 · Updated 2 years ago
- A GPU performance profiling tool for PyTorch models ☆509 · Updated 4 years ago
- Cramming the training of a (BERT-type) language model into limited compute. ☆1,356 · Updated last year
- Is the attention layer even necessary? (https://arxiv.org/abs/2105.02723) ☆484 · Updated 4 years ago
- Implementation of a memory-efficient multi-head attention as proposed in the paper "Self-attention Does Not Need O(n²) Memory" ☆388 · Updated 2 years ago
- functorch: JAX-like composable function transforms for PyTorch. ☆1,438 · Updated 4 months ago
- Implementation of the LAMB optimizer (https://arxiv.org/abs/1904.00962) ☆377 · Updated 5 years ago
- Code for the ALiBi method for transformer language models (ICLR 2022) ☆547 · Updated 2 years ago
- Implementation of RETRO, DeepMind's retrieval-based attention net, in PyTorch ☆876 · Updated 2 years ago
- Named tensors with first-class dimensions for PyTorch ☆332 · Updated 2 years ago
- Profiling and inspecting memory in PyTorch ☆1,076 · Updated 3 months ago
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ☆1,007 · Updated last year
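Several entries above implement memory-efficient attention in the spirit of "Self-attention Does Not Need O(n²) Memory". A minimal NumPy sketch of the underlying chunked, online-softmax trick (hypothetical helper names, not code from any listed repo):

```python
import numpy as np

def naive_attention(q, k, v):
    """Standard attention: materializes the full (n, n) score matrix."""
    s = q @ k.T / np.sqrt(q.shape[-1])
    s = s - s.max(axis=-1, keepdims=True)           # subtract max for numerical stability
    w = np.exp(s)
    return (w @ v) / w.sum(axis=-1, keepdims=True)

def chunked_attention(q, k, v, chunk=4):
    """Same result, but keys/values are processed `chunk` rows at a time,
    keeping running (max, weighted sum, normalizer) statistics per query,
    so only (n, chunk) score tiles ever exist in memory."""
    n, d = q.shape
    out = np.zeros_like(v, dtype=np.float64)
    norm = np.zeros((n, 1))
    run_max = np.full((n, 1), -np.inf)
    for start in range(0, k.shape[0], chunk):
        kc, vc = k[start:start + chunk], v[start:start + chunk]
        s = q @ kc.T / np.sqrt(d)                   # scores for this chunk only
        new_max = np.maximum(run_max, s.max(axis=-1, keepdims=True))
        scale = np.exp(run_max - new_max)           # rescale old accumulators to the new max
        w = np.exp(s - new_max)
        out = out * scale + w @ vc
        norm = norm * scale + w.sum(axis=-1, keepdims=True)
        run_max = new_max
    return out / norm
```

Both functions return the same values; the chunked version just never builds the full attention matrix, which is what brings the working memory below O(n²).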