facebookresearch / diffq
DiffQ performs differentiable quantization using pseudo quantization noise. It can automatically tune the number of bits used per weight or group of weights, in order to achieve a given trade-off between model size and accuracy.
☆235 · Updated last year
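For context, the diffq README documents usage roughly as follows. This is a minimal sketch built around the API names the README documents (`DiffQuantizer`, `setup_optimizer`, `model_size`); the toy model, random data, and penalty value are illustrative placeholders, not part of the library.

```python
import torch
from torch import nn
from torch.nn import functional as F
from diffq import DiffQuantizer

# Toy model and data, purely for illustration.
model = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))
optim = torch.optim.Adam(model.parameters())  # create the optimizer before the quantizer
quantizer = DiffQuantizer(model)
quantizer.setup_optimizer(optim)  # registers the quantizer's bit parameters with the optimizer

penalty = 5  # trade-off knob: a higher penalty pushes toward a smaller, more coarsely quantized model
model.train()  # model.eval() switches to the true quantized weights for evaluation
for step in range(100):
    x, y = torch.randn(32, 16), torch.randn(32, 1)
    # Task loss plus a differentiable estimate of the quantized model size.
    loss = F.mse_loss(model(x), y) + penalty * quantizer.model_size()
    optim.zero_grad()
    loss.backward()
    optim.step()
```

Because `model_size()` is differentiable, the number of bits per weight group is learned jointly with the weights, which is what lets DiffQ tune the size/accuracy trade-off automatically.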
Alternatives and similar repositories for diffq:
Users interested in diffq are comparing it to the libraries listed below.
- Memory Efficient Attention (O(sqrt(n))) for Jax and PyTorch ☆182 · Updated 2 years ago
- Implementation of a Transformer, but completely in Triton ☆261 · Updated 2 years ago
- Implementation of fused cosine similarity attention in the same style as Flash Attention ☆212 · Updated 2 years ago
- Named tensors with first-class dimensions for PyTorch ☆321 · Updated last year
- A library to inspect and extract intermediate layers of PyTorch models. ☆472 · Updated 2 years ago
- Simple and efficient RevNet library for PyTorch with XLA and DeepSpeed support and parameter offload ☆127 · Updated 2 years ago
- Official code for "Distributed Deep Learning in Open Collaborations" (NeurIPS 2021) ☆116 · Updated 3 years ago
- Fast Block Sparse Matrices for PyTorch ☆546 · Updated 4 years ago
- Implementation of the Adan (ADAptive Nesterov momentum algorithm) optimizer in PyTorch ☆251 · Updated 2 years ago
- Accelerate PyTorch models with ONNX Runtime ☆358 · Updated last month
- Implementation of Flash Attention in Jax ☆206 · Updated last year
- Implementation of the specific Transformer architecture from PaLM - Scaling Language Modeling with Pathways - in Jax (Equinox framework) ☆187 · Updated 2 years ago
- Amos optimizer with JEstimator lib. ☆82 · Updated 10 months ago
- Implementation of Mega, the Single-head Attention with Multi-headed EMA architecture that currently holds SOTA on Long Range Arena ☆204 · Updated last year
- Customized matrix multiplication kernels ☆54 · Updated 3 years ago
- A library that contains a rich collection of performant PyTorch model metrics, a simple interface to create new metrics, a toolkit to fac… ☆229 · Updated 2 months ago
- Blazing fast training of 🤗 Transformers on Graphcore IPUs ☆84 · Updated last year
- Prune a model while finetuning or training. ☆402 · Updated 2 years ago
- torch::deploy (multipy for non-torch uses) is a system that lets you get around the GIL problem by running multiple Python interpreters i… ☆178 · Updated 3 months ago
- Library for 8-bit optimizers and quantization routines. ☆717 · Updated 2 years ago
- End-to-end training of sparse deep neural networks with little-to-no performance loss. ☆320 · Updated 2 years ago
- [Prototype] Tools for the concurrent manipulation of variably sized Tensors. ☆252 · Updated 2 years ago
- Butterfly matrix multiplication in PyTorch ☆168 · Updated last year
- This repository contains the experimental PyTorch native float8 training UX ☆222 · Updated 8 months ago
- Torch Distributed Experimental ☆115 · Updated 7 months ago
- MONeT framework for reducing memory consumption of DNN training ☆173 · Updated 3 years ago
- Configuration classes enabling type-safe PyTorch configuration for Hydra apps ☆212 · Updated 2 years ago
- Contrastive Language-Image Pretraining ☆142 · Updated 2 years ago