☆207 · Updated May 5, 2025
Alternatives and similar repositories for AutoFP8
Users interested in AutoFP8 are comparing it to the libraries listed below; a minimal FP8 quantization sketch follows the list.
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM · ☆2,891 · Updated this week
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens · ☆1,041 · Updated Sep 4, 2024
- Fast low-bit matmul kernels in Triton · ☆438 · Updated Feb 1, 2026
- Boosting 4-bit inference kernels with 2:4 sparsity · ☆94 · Updated Sep 4, 2024
- A study of CUTLASS · ☆22 · Updated Nov 10, 2024
- FlashInfer: Kernel Library for LLM Serving · ☆5,145 · Updated this week
- A safetensors extension to efficiently store sparse quantized tensors on disk · ☆263 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs · ☆267 · Updated Dec 4, 2025
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs · ☆155 · Updated Aug 21, 2025
- FP8 flash attention for the Ada architecture, implemented with the CUTLASS library · ☆80 · Updated Aug 12, 2024
- A throughput-oriented high-performance serving framework for LLMs · ☆949 · Updated Oct 29, 2025
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) · ☆276 · Updated Jul 16, 2025
- Benchmark tests supporting the TiledCUDA library · ☆18 · Updated Nov 19, 2024
- PyTorch native quantization and sparsity for training and inference · ☆2,730 · Updated this week
- ☆33 · Updated Feb 3, 2025
- Easy and Efficient Quantization for Transformers · ☆205 · Updated Jan 28, 2026
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… · ☆3,211 · Updated this week
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… · ☆818 · Updated Mar 6, 2025
- ☆97 · Updated Mar 26, 2025
- Dynamic Memory Management for Serving LLMs without PagedAttention · ☆466 · Updated May 30, 2025
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance · ☆330 · Updated this week
- Microsoft Automatic Mixed Precision Library · ☆634 · Updated Dec 1, 2025
- A unified library of SOTA model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. It compresse… · ☆2,156 · Updated this week
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. Documentation: · ☆2,317 · Updated May 11, 2025
- Applied AI experiments and examples for PyTorch · ☆319 · Updated Aug 22, 2025
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing · ☆106 · Updated Jun 28, 2025
- Quantized Attention on GPU · ☆44 · Updated Nov 22, 2024
- Build compute kernels and load them from the Hub · ☆494 · Updated Mar 13, 2026
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up Long-context LLMs' inference, approximate and dynamic sparse calculate the attention… · ☆1,196 · Updated Mar 9, 2026
- ☆168 · Updated Mar 9, 2023
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment · ☆752 · Updated Aug 6, 2025
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models · ☆1,621 · Updated Jul 12, 2024
- Open deep learning compiler stack for CPU, GPU and specialized accelerators · ☆19 · Updated Mar 12, 2026
- [ICLR 2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding · ☆144 · Updated Dec 4, 2024
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… · ☆3,945 · Updated Mar 13, 2026
- vLLM performance dashboard · ☆43 · Updated Apr 26, 2024
- Code for the NeurIPS'24 paper: QuaRot, an end-to-end 4-bit inference of large language models · ☆489 · Updated Nov 26, 2024
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3 (NeurIPS'25) · ☆2,220 · Updated Feb 20, 2026
- This repository contains the experimental PyTorch native float8 training UX · ☆226 · Updated Aug 1, 2024
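
For context on what the FP8 tools in this list do: weight quantization of the AutoFP8 kind boils down to picking a scale that maps a tensor's dynamic range onto the FP8 E4M3 format and casting. Below is a minimal sketch in plain PyTorch (requires torch ≥ 2.1 for `float8_e4m3fn`); the function names are illustrative assumptions, not AutoFP8's or any listed library's actual API.

```python
import torch

FP8 = torch.float8_e4m3fn
FP8_MAX = torch.finfo(FP8).max  # 448.0 for E4M3

def quantize_fp8_per_tensor(w: torch.Tensor):
    """Symmetric per-tensor FP8 quantization: one scale for the whole tensor.
    Illustrative sketch only, not a library API."""
    # Choose the scale so the largest-magnitude weight maps to the FP8 maximum.
    scale = w.abs().amax().float().clamp(min=1e-12) / FP8_MAX
    w_fp8 = (w.float() / scale).clamp(-FP8_MAX, FP8_MAX).to(FP8)
    return w_fp8, scale

def dequantize_fp8(w_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # FP8 tensors must be upcast before arithmetic.
    return w_fp8.float() * scale

if __name__ == "__main__":
    w = torch.randn(4096, 4096, dtype=torch.float16)
    w_fp8, scale = quantize_fp8_per_tensor(w)
    err = (w.float() - dequantize_fp8(w_fp8, scale)).abs().max()
    print(f"scale={scale.item():.3e}, max abs error={err.item():.3e}")
```

Per-tensor scaling is the simplest scheme; several of the libraries above instead use per-channel or per-block scales, which trade a little metadata for noticeably lower quantization error.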