HanGuo97 / flute
Fast Matrix Multiplications for Lookup Table-Quantized LLMs
☆358 · Updated last week
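flute fuses lookup-table (LUT) dequantization into the matmul kernel itself, so quantized weights never need to be materialized in full precision. As a rough, minimal PyTorch sketch of the LUT-dequantization idea (an illustration under assumed shapes and names, not flute's actual kernel or API):

```python
import torch

# Minimal sketch of lookup-table (LUT) dequantized matmul. All shapes,
# names, and the table layout below are assumptions for illustration;
# flute's real fused CUDA kernels do not look like this.
def lut_dequant_matmul(x, codes, tables, group_size=128):
    """x: (m, k) activations; codes: (k, n) integer codes in [0, 2**bits);
    tables: (k // group_size, n, 2**bits) per-group lookup tables."""
    k, n = codes.shape
    w = torch.empty(k, n, dtype=x.dtype, device=x.device)
    for g in range(k // group_size):
        rows = slice(g * group_size, (g + 1) * group_size)
        # look up tables[g][j, code] for every (row, column) in this group
        w[rows] = tables[g].gather(1, codes[rows].t().long()).t().to(x.dtype)
    return x @ w  # a fast kernel fuses the lookup into the GEMM instead

# usage: 4-bit codes, so each (group, column) table has 16 entries
m, k, n, bits = 8, 256, 64, 4
x = torch.randn(m, k)
codes = torch.randint(0, 2**bits, (k, n))
tables = torch.randn(k // 128, n, 2**bits)
y = lut_dequant_matmul(x, codes, tables)  # (8, 64)
```

The point of fusing is that `w` is never built explicitly: a fast kernel performs the table lookups on the fly while accumulating the product.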
Alternatives and similar repositories for flute:
Users interested in flute are comparing it to the libraries listed below.
- ☆126 · Updated last month
- Fast low-bit matmul kernels in Triton ☆291 · Updated this week
- ☆55 · Updated 5 months ago
- [NeurIPS 2024] KVQuant: Towards 10 Million Context Length LLM Inference with KV Cache Quantization ☆340 · Updated 8 months ago
- Code for the NeurIPS 2024 paper QuaRot: end-to-end 4-bit inference of large language models ☆379 · Updated 5 months ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) ☆248 · Updated 5 months ago
- [ICLR'25] Fast Inference of MoE Models with CPU-GPU Orchestration ☆208 · Updated 5 months ago
- Fast Hadamard transform in CUDA, with a PyTorch interface (see the sketch after this list) ☆174 · Updated 11 months ago
- LLM KV cache compression made easy ☆463 · Updated last week
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving ☆305 · Updated 9 months ago
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients ☆198 · Updated 9 months ago
- Efficient LLM Inference over Long Sequences ☆370 · Updated this week
- A safetensors extension to efficiently store sparse quantized tensors on disk ☆100 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆262 · Updated 6 months ago
- Applied AI experiments and examples for PyTorch ☆262 · Updated last month
- [ICML 2024] KIVI: A Tuning-Free Asymmetric 2bit Quantization for KV Cache ☆288 · Updated 3 months ago
- Repository for the QUIK project (EMNLP 2024), enabling the use of 4-bit kernels for generative inference ☆179 · Updated last year
- ☆122 · Updated 2 months ago
- Boosting 4-bit inference kernels with 2:4 Sparsity ☆72 · Updated 7 months ago
- [ICML 2024] BiLLM: Pushing the Limit of Post-Training Quantization for LLMs ☆214 · Updated 3 months ago
- EfficientQAT: Efficient Quantization-Aware Training for Large Language Models ☆263 · Updated 6 months ago
- Triton-based implementation of Sparse Mixture of Experts ☆210 · Updated 4 months ago
- KV cache compression for high-throughput LLM inference ☆126 · Updated 2 months ago
- GEAR: An Efficient KV Cache Compression Recipe for Near-Lossless Generative Inference of LLM ☆159 · Updated 9 months ago
- This repository contains the experimental PyTorch native float8 training UX ☆223 · Updated 8 months ago
- Perplexity GPU Kernels ☆251 · Updated this week
- VPTQ: a flexible and extreme low-bit quantization algorithm ☆628 · Updated 3 weeks ago
- Code for the paper "QuIP: 2-Bit Quantization of Large Language Models With Guarantees" ☆363 · Updated last year
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆643 · Updated last month
- ☆208 · Updated 3 months ago
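Several entries above (the fast Hadamard transform repo, QuaRot, QuIP) rely on Hadamard-style rotations to spread activation outliers before quantization. Below is a minimal, unoptimized PyTorch sketch of the fast Walsh-Hadamard transform that such CUDA kernels implement; the function name and the orthonormal scaling are my choices here, not any of these repos' APIs.

```python
import torch

def fwht(x: torch.Tensor) -> torch.Tensor:
    """Orthonormal fast Walsh-Hadamard transform over the last dim.
    O(n log n) butterfly passes; n must be a power of two."""
    n = x.shape[-1]
    assert n > 0 and n & (n - 1) == 0, "length must be a power of two"
    y = x.clone()
    h = 1
    while h < n:
        # pair each block's first half (a) with its second half (b)
        y = y.view(*x.shape[:-1], n // (2 * h), 2, h)
        a, b = y[..., 0, :], y[..., 1, :]
        y = torch.stack((a + b, a - b), dim=-2)
        h *= 2
    return y.reshape(x.shape) / n ** 0.5  # H / sqrt(n) is its own inverse

# usage: the orthonormal transform is an involution
x = torch.randn(4, 8)
assert torch.allclose(fwht(fwht(x)), x, atol=1e-5)
```

Because the orthonormal transform is its own inverse, rotating weights and un-rotating activations cancels exactly in full precision, which is why these methods can wrap it around quantized layers.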