aredden / torch-bnb-fp4
Faster PyTorch bitsandbytes 4-bit FP4 nn.Linear ops
☆30 · Updated last year
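For context, a minimal sketch of the kind of layer this repo accelerates: a standard bitsandbytes 4-bit FP4 `nn.Linear` replacement. This shows only the stock `bitsandbytes` `Linear4bit` API, not torch-bnb-fp4's own drop-in interface; the layer sizes are arbitrary and a CUDA GPU is assumed.

```python
import torch
import bitsandbytes as bnb

# An FP4-quantized linear layer: weights are stored in 4-bit FP4,
# the matmul is computed in float16. Moving the layer to CUDA
# triggers the weight quantization.
linear = bnb.nn.Linear4bit(
    4096, 4096, bias=False,
    compute_dtype=torch.float16,
    quant_type="fp4",
).cuda()

x = torch.randn(1, 4096, dtype=torch.float16, device="cuda")
y = linear(x)       # forward pass through the FP4 layer
print(y.shape)      # torch.Size([1, 4096])
```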
Alternatives and similar repositories for torch-bnb-fp4
Users interested in torch-bnb-fp4 are comparing it to the libraries listed below.
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆42 · Updated last year
- ☆159 · Updated 2 years ago
- This repository contains the experimental PyTorch native float8 training UX ☆224 · Updated last year
- ☆118 · Updated last year
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆46 · Updated last year
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆82 · Updated last year
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆185 · Updated 3 months ago
- Official implementation for Training LLMs with MXFP4 ☆91 · Updated 4 months ago
- Official PyTorch implementation of "GuidedQuant: Large Language Model Quantization via Exploiting End Loss Guidance" (ICML 2025) ☆43 · Updated 2 months ago
- ☆142 · Updated 7 months ago
- ☆94 · Updated 3 weeks ago
- Odysseus: Playground of LLM Sequence Parallelism ☆77 · Updated last year
- QuIP quantization ☆59 · Updated last year
- A library for unit scaling in PyTorch ☆130 · Updated 2 months ago
- This repository contains code for the MicroAdam paper. ☆19 · Updated 9 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆161 · Updated this week
- A block-oriented training approach for inference-time optimization. ☆34 · Updated last year
- PB-LLM: Partially Binarized Large Language Models ☆153 · Updated last year
- Low-bit optimizers for PyTorch ☆131 · Updated last year
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆73 · Updated last year
- Experiment of using Tangent to autodiff Triton ☆81 · Updated last year
- ☆111 · Updated last year
- A fusion of a linear layer and a cross-entropy loss, written for PyTorch in Triton ☆70 · Updated last year
- ☆150 · Updated 3 months ago
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees", adapted for Llama models ☆40 · Updated 2 years ago
- Research implementation of Native Sparse Attention (arXiv:2502.11089) ☆61 · Updated 7 months ago
- Explore training for quantized models ☆24 · Updated 2 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆93 · Updated 3 months ago
- ☆29 · Updated last year
- ☆88 · Updated last year