aredden / torch-bnb-fp4
Faster PyTorch bitsandbytes 4-bit FP4 nn.Linear ops
☆30 · Updated last year
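torch-bnb-fp4 targets the 4-bit FP4 linear layers that bitsandbytes provides. As a rough sketch of the kind of op it accelerates (this uses the stock bitsandbytes `Linear4bit` API, not torch-bnb-fp4's own interface, and assumes a recent bitsandbytes build plus a CUDA GPU):

```python
import torch
import bitsandbytes as bnb

# Stock bitsandbytes 4-bit FP4 linear layer; torch-bnb-fp4 provides faster
# kernels for this kind of op (its own swap/patch API is not shown here).
layer = bnb.nn.Linear4bit(
    4096, 4096,
    bias=False,
    compute_dtype=torch.float16,
    quant_type="fp4",   # FP4 quantization, as opposed to "nf4"
).cuda()  # weights are quantized to 4-bit when moved to the GPU

x = torch.randn(1, 4096, dtype=torch.float16, device="cuda")
with torch.no_grad():
    y = layer(x)
print(y.shape)  # torch.Size([1, 4096])
```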
Alternatives and similar repositories for torch-bnb-fp4
Users interested in torch-bnb-fp4 are comparing it to the libraries listed below.
- ☆159 · Updated 2 years ago
- This repository contains the experimental PyTorch native float8 training UX ☆227 · Updated last year
- ☆121 · Updated last year
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆46 · Updated last year
- Repository for Sparse Finetuning of LLMs via modified version of the MosaicML llmfoundry ☆42 · Updated last year
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆73 · Updated last year
- A block oriented training approach for inference time optimization. ☆33 · Updated last year
- Experiment of using Tangent to autodiff triton ☆81 · Updated last year
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆86 · Updated last year
- QuIP quantization ☆61 · Updated last year
- ☆155 · Updated 10 months ago
- A library for unit scaling in PyTorch ☆132 · Updated 5 months ago
- research impl of Native Sparse Attention (2502.11089) ☆63 · Updated 9 months ago
- ☆115 · Updated last year
- The evaluation framework for training-free sparse attention in LLMs ☆106 · Updated 2 months ago
- ☆30 · Updated last year
- FlexAttention w/ FlashAttention3 Support ☆27 · Updated last year
- ☆150 · Updated 2 years ago
- Cold Compress is a hackable, lightweight, and open-source toolkit for creating and benchmarking cache compression methods built on top of… ☆146 · Updated last year
- ☆159 · Updated 5 months ago
- ACL 2023 ☆39 · Updated 2 years ago
- Official PyTorch implementation of "GuidedQuant: Large Language Model Quantization via Exploiting End Loss Guidance" (ICML 2025) ☆48 · Updated 5 months ago
- ☆83 · Updated 10 months ago
- PB-LLM: Partially Binarized Large Language Models ☆157 · Updated 2 years ago
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆222 · Updated 6 months ago
- Advanced Ultra-Low Bitrate Compression Techniques for the LLaMA Family of LLMs ☆110 · Updated last year
- Odysseus: Playground of LLM Sequence Parallelism ☆78 · Updated last year
- Implementation of IceFormer: Accelerated Inference with Long-Sequence Transformers on CPUs (ICLR 2024). ☆25 · Updated 5 months ago
- [ACL 2024] RelayAttention for Efficient Large Language Model Serving with Long System Prompts ☆40 · Updated last year
- ☆157 · Updated 2 years ago