☆437 · Updated Sep 18, 2025
Alternatives and similar repositories for xFasterTransformer
Users interested in xFasterTransformer are comparing it to the libraries listed below.
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… (☆2,175 · Updated Oct 8, 2024)
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … (☆273 · Updated Aug 6, 2025)
- A throughput-oriented, high-performance serving framework for LLMs (☆946 · Updated Oct 29, 2025)
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. (☆1,051 · Updated this week)
- A high-performance inference system for large language models, designed for production environments. (☆492 · Updated Dec 19, 2025)
- FP16xINT4 LLM inference kernel that achieves near-ideal ~4x speedups up to medium batch sizes of 16–32 tokens. (☆1,018 · Updated Sep 4, 2024)
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer (☆96 · Updated Feb 20, 2026)
- FlashInfer: Kernel Library for LLM Serving (☆5,009 · Updated this week)
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). (☆250 · Updated Mar 15, 2024)
- Easy and Efficient Quantization for Transformers (☆205 · Updated Jan 28, 2026)
- An innovative library for efficient LLM inference via low-bit quantization (☆352 · Updated Aug 30, 2024)
- oneCCL Bindings for Pytorch* (deprecated) (☆105 · Updated Dec 31, 2025)
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. (☆4,795 · Updated this week)
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. (☆1,261 · Updated Aug 28, 2025)
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… (☆816 · Updated Mar 6, 2025)
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. (☆7,618 · Updated this week)
- fastllm is a high-performance LLM inference library with no backend dependencies. It supports both tensor-parallel inference of dense models and mixed-mode inference of MoE models; any GPU with 10 GB+ of memory can run the full DeepSeek model. A dual-socket 9004/9005 server plus a single GPU can deploy the original full-precision DeepSeek model at 20 tps at single concurrency; the INT4-quantized model reaches 30 tp… (☆4,154 · Updated Feb 14, 2026)
- SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, … (☆2,585 · Updated Feb 20, 2026)
- Materials for learning SGLang (☆753 · Updated Jan 5, 2026)
- A general 2–8 bit quantization toolbox with GPTQ/AWQ/HQQ/VPTQ, and easy export to onnx/onnx-runtime (☆184 · Updated Apr 2, 2025)
- A Python package extending the official PyTorch to easily obtain performance gains on Intel platforms (☆2,012 · Updated Feb 13, 2026)
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… (☆3,901 · Updated Feb 20, 2026)
- [MLSys'24] Atom: Low-bit Quantization for Efficient and Accurate LLM Serving (☆336 · Updated Jul 2, 2024)
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. (☆2,095 · Updated Jun 30, 2025)
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLM inference, approximate and dynamic sparse calculation of the attention… (☆1,188 · Updated Sep 30, 2025)
- Transformer-related optimization, including BERT, GPT (☆6,394 · Updated Mar 27, 2024)
- Nsight Compute in Docker (☆13 · Updated Dec 21, 2023)
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… (☆12,938 · Updated this week)
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference. (☆46 · Updated Jun 11, 2025)
- Dynamic Memory Management for Serving LLMs without PagedAttention (☆463 · Updated May 30, 2025)
- BitBLAS is a library to support mixed-precision matrix multiplications, especially for quantized LLM deployment. (☆751 · Updated Aug 6, 2025)
- Disaggregated serving system for Large Language Models (LLMs). (☆777 · Updated Apr 6, 2025)
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads (☆2,708 · Updated Jun 25, 2024)
- Efficient AI Inference & Serving (☆479 · Updated Jan 8, 2024)
- Standalone Flash Attention v2 kernel without libtorch dependency (☆114 · Updated Sep 10, 2024)