☆ 209 · Updated May 5, 2025 (11 months ago)
Alternatives and similar repositories for AutoFP8
Users interested in AutoFP8 are comparing it to the libraries listed below.
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM · ☆ 2,960 · Updated Mar 31, 2026 (last week)
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens · ☆ 1,048 · Updated Sep 4, 2024 (last year)
- Fast low-bit matmul kernels in Triton · ☆ 443 · Updated this week
- Boosting 4-bit inference kernels with 2:4 sparsity · ☆ 94 · Updated Sep 4, 2024 (last year)
- A study of CUTLASS · ☆ 22 · Updated Nov 10, 2024 (last year)
- FlashInfer: Kernel Library for LLM Serving · ☆ 5,273 · Updated this week
- A safetensors extension to efficiently store sparse quantized tensors on disk · ☆ 268 · Updated this week
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs · ☆ 154 · Updated Aug 21, 2025 (7 months ago)
- A high-throughput and memory-efficient inference and serving engine for LLMs · ☆ 266 · Updated Dec 4, 2025 (4 months ago)
- A throughput-oriented high-performance serving framework for LLMs · ☆ 953 · Updated Mar 29, 2026 (last week)
- FP8 flash attention implemented with the CUTLASS library on the Ada architecture · ☆ 82 · Updated Aug 12, 2024 (last year)
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) · ☆ 278 · Updated Jul 16, 2025 (8 months ago)
- Benchmark tests supporting the TiledCUDA library · ☆ 18 · Updated Nov 19, 2024 (last year)
- PyTorch-native quantization and sparsity for training and inference · ☆ 2,756 · Updated this week
- ☆ 33 · Updated Feb 3, 2025 (last year)
- Easy and Efficient Quantization for Transformers · ☆ 207 · Updated Mar 25, 2026 (2 weeks ago)
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… · ☆ 3,256 · Updated this week
- ☆ 98 · Updated Mar 26, 2025 (last year)
- Dynamic Memory Management for Serving LLMs without PagedAttention · ☆ 470 · Updated May 30, 2025 (10 months ago)
- Tritonbench is a collection of PyTorch custom operators with example inputs to measure their performance · ☆ 338 · Updated this week
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… · ☆ 822 · Updated Mar 6, 2025 (last year)
- Microsoft Automatic Mixed Precision Library · ☆ 635 · Updated Dec 1, 2025 (4 months ago)
- A unified library of SOTA model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. It compresse… · ☆ 2,358 · Updated this week
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference · ☆ 2,322 · Updated May 11, 2025 (10 months ago)
- Applied AI experiments and examples for PyTorch · ☆ 320 · Updated Aug 22, 2025 (7 months ago)
- TileFusion is an experimental C++ macro kernel template library that elevates the abstraction level in CUDA C for tile processing · ☆ 106 · Updated Jun 28, 2025 (9 months ago)
- Quantized Attention on GPU · ☆ 44 · Updated Nov 22, 2024 (last year)
- Build compute kernels and load them from the Hub · ☆ 552 · Updated this week
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment · ☆ 758 · Updated Aug 6, 2025 (8 months ago)
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models (see the scaling sketch after this list) · ☆ 1,631 · Updated Jul 12, 2024 (last year)
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculate the attention… · ☆ 1,202 · Updated Mar 9, 2026 (last month)
- Open deep learning compiler stack for CPU, GPU, and specialized accelerators · ☆ 19 · Updated Apr 1, 2026 (last week)
- [ICLR2025] Breaking Throughput-Latency Trade-off for Long Sequences with Speculative Decoding · ☆ 145 · Updated Dec 4, 2024 (last year)
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… · ☆ 3,995 · Updated this week
- vLLM performance dashboard · ☆ 44 · Updated Apr 26, 2024 (last year)
- ☆ 171 · Updated Mar 9, 2023 (3 years ago)
- Code for the NeurIPS 2024 paper QuaRot: end-to-end 4-bit inference of large language models (see the rotation sketch after this list) · ☆ 498 · Updated Nov 26, 2024 (last year)
- This repository contains the experimental PyTorch-native float8 training UX · ☆ 226 · Updated Aug 1, 2024 (last year)
- ☆ 105 · Updated Sep 9, 2024 (last year)
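
Since this page's subject, AutoFP8, produces FP8-quantized checkpoints for serving with vLLM, a minimal per-tensor FP8 (e4m3) quantization sketch helps frame several of the entries above. This is an illustration only, not AutoFP8's actual code: the function name is hypothetical, and it assumes PyTorch ≥ 2.1 for the `torch.float8_e4m3fn` dtype.

```python
import torch

def quantize_fp8_per_tensor(x: torch.Tensor):
    """Hypothetical per-tensor FP8 quantization: scale so the max
    magnitude maps to the e4m3 maximum (448), then cast to FP8."""
    finfo = torch.finfo(torch.float8_e4m3fn)
    scale = x.abs().max().clamp(min=1e-12) / finfo.max  # dequantization scale
    x_q = (x / scale).clamp(finfo.min, finfo.max).to(torch.float8_e4m3fn)
    return x_q, scale

w = torch.randn(4096, 4096)
w_q, s = quantize_fp8_per_tensor(w)
w_ref = w_q.to(torch.float32) * s  # dequantize for an error check
print((w - w_ref).abs().max())     # worst-case rounding error
```

A serving kernel would consume `w_q` directly and fold `s` into the output, which is where the low-bit matmul and attention kernel libraries listed above come in.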
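
The SmoothQuant entry above refers to migrating quantization difficulty from activation outliers into the weights via per-channel scaling, s_j = max|X_j|^α / max|W_j|^(1-α). Below is a rough sketch of that idea under assumed shapes, with a hypothetical helper name; the scaled factors compute the same product because X diag(1/s) · diag(s) W = XW.

```python
import torch

def smoothquant_scales(x_absmax, w_absmax, alpha: float = 0.5):
    """Per-input-channel smoothing factors s_j = max|X_j|^a / max|W_j|^(1-a)."""
    return (x_absmax.pow(alpha) / w_absmax.pow(1 - alpha)).clamp(min=1e-5)

# Toy example: activations X [tokens, in] with outlier channels, weights W [in, out].
X = torch.randn(8, 16) * torch.linspace(0.1, 10.0, 16)
W = torch.randn(16, 32)
s = smoothquant_scales(X.abs().amax(dim=0), W.abs().amax(dim=1))
X_s, W_s = X / s, W * s[:, None]  # difficulty migrates into the weights
assert torch.allclose(X @ W, X_s @ W_s, atol=1e-3)  # same product, flatter activations
```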
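
The QuaRot entry relies on a different trick: multiply activations by an orthogonal (Hadamard) rotation and fold its transpose into the weights, so outliers are spread across channels before 4-bit quantization while the matmul result is unchanged, since (XH)(HᵀW) = XW. The sketch below only demonstrates that identity with a Sylvester-construction Hadamard matrix; it is not the paper's implementation.

```python
import torch

def hadamard(n: int) -> torch.Tensor:
    """Normalized Hadamard matrix for n a power of two (Sylvester construction)."""
    H = torch.ones(1, 1)
    while H.shape[0] < n:
        H = torch.cat([torch.cat([H, H], 1), torch.cat([H, -H], 1)], 0)
    return H / H.shape[0] ** 0.5

X = torch.randn(8, 16)
X[:, 3] *= 50.0                        # one outlier channel
W = torch.randn(16, 32)
H = hadamard(16)
X_r, W_r = X @ H, H.T @ W              # rotation is absorbed into the weights
assert torch.allclose(X @ W, X_r @ W_r, atol=1e-3)  # product unchanged
print(X.abs().max(), X_r.abs().max())  # outlier magnitude drops after rotation
```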