aredden / torch-bnb-fp4
Faster PyTorch bitsandbytes 4-bit FP4 nn.Linear ops
☆30 · Updated last year
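For context, torch-bnb-fp4 targets the 4-bit FP4 linear layers exposed by bitsandbytes. A minimal sketch of the baseline layer it claims to accelerate is below; the layer sizes and compute dtype are illustrative assumptions, not values from this repo.

```python
import torch
import bitsandbytes as bnb

# Baseline: a bitsandbytes 4-bit FP4 linear layer (the op torch-bnb-fp4
# aims to speed up). Shapes and dtype here are illustrative assumptions.
layer = bnb.nn.Linear4bit(
    4096, 4096,
    bias=False,
    compute_dtype=torch.float16,
    quant_type="fp4",  # FP4 storage, as opposed to the NF4 default
).cuda()  # weights are quantized when moved to the GPU

x = torch.randn(1, 4096, dtype=torch.float16, device="cuda")
y = layer(x)  # forward pass through the quantized linear op
```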
Alternatives and similar repositories for torch-bnb-fp4
Users interested in torch-bnb-fp4 are comparing it to the libraries listed below.
- Demo of the unit_scaling library, showing how a model can be easily adapted to train in FP8. ☆46 · Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of MosaicML's llmfoundry ☆42 · Updated last year
- ☆157 · Updated last year
- A library for unit scaling in PyTorch ☆125 · Updated this week
- ☆112 · Updated last year
- Flash-Muon: An Efficient Implementation of Muon Optimizer ☆142 · Updated last month
- This repository contains the experimental PyTorch native float8 training UX ☆224 · Updated 11 months ago
- Research implementation of Native Sparse Attention (arXiv:2502.11089) ☆54 · Updated 4 months ago
- Experiment in using Tangent to autodiff Triton ☆79 · Updated last year
- Triton kernels for Flux ☆20 · Updated last week
- Triton Implementation of HyperAttention Algorithm ☆48 · Updated last year
- Fast, Modern, and Low Precision PyTorch Optimizers ☆98 · Updated this week
- Odysseus: Playground of LLM Sequence Parallelism ☆70 · Updated last year
- A bunch of kernels that might make stuff slower 😉 ☆55 · Updated this week
- CUDA and Triton implementations of Flash Attention with SoftmaxN. ☆70 · Updated last year
- QuIP quantization ☆54 · Updated last year
- A place to store reusable transformer components of my own creation or found on the interwebs ☆56 · Updated 2 weeks ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆80 · Updated 10 months ago
- ☆51 · Updated last year
- Low-bit optimizers for PyTorch ☆129 · Updated last year
- ☆28 · Updated 11 months ago
- DPO, but faster 🚀 ☆43 · Updated 7 months ago
- ☆77 · Updated 5 months ago
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆135 · Updated this week
- Load compute kernels from the Hub ☆207 · Updated this week
- Code for "RSQ: Learning from Important Tokens Leads to Better Quantized LLMs" ☆18 · Updated last month
- The evaluation framework for training-free sparse attention in LLMs ☆83 · Updated 3 weeks ago
- Repository for CPU Kernel Generation for LLM Inference ☆26 · Updated 2 years ago
- 🎬 3.7× faster video generation E2E 🖼️ 1.6× faster image generation E2E ⚡ ColumnSparseAttn 9.3× vs FlashAttn‑3 💨 ColumnSparseGEMM 2.5× … ☆74 · Updated 3 weeks ago
- 32× longer context window than vanilla Transformers and up to 4× longer than memory-efficient Transformers ☆48 · Updated 2 years ago