Anonymous1252022 / fp4-all-the-way
☆38 · Updated 6 months ago
Alternatives and similar repositories for fp4-all-the-way
Users interested in fp4-all-the-way are comparing it to the libraries listed below.
- Work in progress. ☆75 · Updated last week
- ☆111 · Updated 2 weeks ago
- ☆158 · Updated 5 months ago
- ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization ☆111 · Updated last year
- ☆51 · Updated 6 months ago
- The evaluation framework for training-free sparse attention in LLMs ☆106 · Updated last month
- Official implementation for Training LLMs with MXFP4 ☆110 · Updated 7 months ago
- ☆154 · Updated 9 months ago
- An extension of the GaLore paper, performing Natural Gradient Descent in a low-rank subspace ☆18 · Updated last year
- ☆132 · Updated 6 months ago
- LLM Inference with Microscaling Format ☆33 · Updated last year (see the FP4/MX sketch after this list)
- ☆64 · Updated 5 months ago
- QuIP quantization ☆61 · Updated last year
- Activation-aware Singular Value Decomposition for Compressing Large Language Models ☆80 · Updated last year
- HALO: Hadamard-Assisted Low-Precision Optimization and Training method for finetuning LLMs. 🚀 The official implementation of https://arx… ☆29 · Updated 9 months ago
- An efficient implementation of the NSA (Native Sparse Attention) kernel ☆126 · Updated 5 months ago
- [ICML 2025] SliM-LLM: Salience-Driven Mixed-Precision Quantization for Large Language Models ☆46 · Updated last year
- Vortex: A Flexible and Efficient Sparse Attention Framework ☆33 · Updated last week
- Training-free Post-training Efficient Sub-quadratic Complexity Attention. Implemented with OpenAI Triton. ☆148 · Updated last month
- An algorithm for weight-activation quantization (W4A4, W4A8) of LLMs, supporting both static and dynamic quantization ☆168 · Updated last week
- PB-LLM: Partially Binarized Large Language Models ☆157 · Updated 2 years ago
- [COLM 2025] Official PyTorch implementation of "Quantization Hurts Reasoning? An Empirical Study on Quantized Reasoning Models" ☆61 · Updated 4 months ago
- ☆83 · Updated 10 months ago
- FBI-LLM: Scaling Up Fully Binarized LLMs from Scratch via Autoregressive Distillation ☆51 · Updated 3 months ago
- ☆71 · Updated 4 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆86 · Updated last year
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆170 · Updated last year
- ☆31 · Updated last year
- ☆49 · Updated last year
- [ICML 2024 Oral] Any-Precision LLM: Low-Cost Deployment of Multiple, Different-Sized LLMs ☆121 · Updated 5 months ago
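Several entries above revolve around block-scaled 4-bit floating point (fp4-all-the-way itself, the "Training LLMs with MXFP4" implementation, and "LLM Inference with Microscaling Format"). For orientation, here is a minimal sketch of simulated FP4 (E2M1) quantization with one shared power-of-two scale per block, in the spirit of microscaling (MX) formats. It is illustrative only: the function name and block size are choices made for this sketch, and nothing here is code from any repository listed on this page.

```python
# Illustrative sketch only -- not code from fp4-all-the-way or any repository
# listed above. Simulates FP4 (E2M1) quantization with one shared
# power-of-two scale per block, in the spirit of microscaling (MX) formats.
import numpy as np

# The eight non-negative magnitudes representable in E2M1
# (1 sign bit, 2 exponent bits, 1 mantissa bit); the largest is 6 = 1.5 * 2**2.
FP4_E2M1_VALUES = np.array([0.0, 0.5, 1.0, 1.5, 2.0, 3.0, 4.0, 6.0],
                           dtype=np.float32)

def quantize_fp4_block(x: np.ndarray) -> tuple[np.ndarray, float]:
    """Fake-quantize a 1-D block to FP4; return (dequantized block, scale)."""
    amax = float(np.max(np.abs(x)))
    if amax == 0.0:
        return np.zeros_like(x), 1.0
    # One shared scale per block, chosen as 2**(floor(log2(amax)) - emax)
    # with emax = 2 for E2M1, following the OCP MX convention.
    scale = 2.0 ** (np.floor(np.log2(amax)) - 2)
    scaled = x / scale
    # Round each magnitude to the nearest representable E2M1 value
    # (magnitudes above 6 are clipped to 6 by nearest-value rounding).
    idx = np.abs(np.abs(scaled)[:, None] - FP4_E2M1_VALUES[None, :]).argmin(axis=1)
    q = np.sign(scaled) * FP4_E2M1_VALUES[idx]
    return (q * scale).astype(np.float32), float(scale)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    block = rng.normal(size=32).astype(np.float32)  # MX uses blocks of 32
    deq, s = quantize_fp4_block(block)
    print(f"scale={s}, max abs error={np.max(np.abs(block - deq)):.4f}")
```

A real kernel would store the packed 4-bit codes plus one shared exponent per block rather than dequantizing eagerly, and training-oriented projects such as the MXFP4 implementation above typically add stochastic rounding and backward-pass handling that this sketch omits.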