mlc-ai / xgrammar
Fast, Flexible and Portable Structured Generation
☆922 · Updated this week
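Since xgrammar's tagline is structured generation, here is a minimal sketch of how grammar-constrained decoding typically looks with its Python API. It follows the flow shown in xgrammar's documentation (TokenizerInfo, GrammarCompiler, GrammarMatcher, token bitmasks); the model ID and the hand-rolled generation loop are illustrative assumptions, and exact names may vary between versions.

```python
# Minimal sketch: grammar-constrained JSON generation with xgrammar.
# Assumes a HuggingFace causal LM; API names follow xgrammar's docs
# but may differ across versions.
import torch
import xgrammar as xgr
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen2.5-0.5B-Instruct"  # illustrative model choice
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)

# Compile a grammar once; here, the builtin JSON grammar.
tokenizer_info = xgr.TokenizerInfo.from_huggingface(
    tokenizer, vocab_size=model.config.vocab_size
)
compiler = xgr.GrammarCompiler(tokenizer_info)
compiled_grammar = compiler.compile_builtin_json_grammar()

# Per-request state: a matcher plus a reusable token bitmask.
matcher = xgr.GrammarMatcher(compiled_grammar)
token_bitmask = xgr.allocate_token_bitmask(1, tokenizer_info.vocab_size)

input_ids = tokenizer("Reply with a JSON object: ", return_tensors="pt").input_ids
for _ in range(128):
    logits = model(input_ids).logits[:, -1, :]
    # Mask (set to -inf) every token the grammar forbids next.
    matcher.fill_next_token_bitmask(token_bitmask)
    xgr.apply_token_bitmask_inplace(logits, token_bitmask.to(logits.device))
    next_token = int(torch.argmax(logits, dim=-1))
    matcher.accept_token(next_token)  # advance the grammar state
    input_ids = torch.cat([input_ids, torch.tensor([[next_token]])], dim=-1)
    if matcher.is_terminated():  # grammar reached an accepting state
        break

print(tokenizer.decode(input_ids[0]))
```

The design point worth noting is that the matcher only masks logits, so any existing sampling loop (HF, vLLM, SGLang) can adopt it by adding one masking step per decoded token.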
Alternatives and similar repositories for xgrammar:
Users interested in xgrammar are comparing it to the libraries listed below.
- A throughput-oriented high-performance serving framework for LLMs ☆805 · Updated last week
- FlashInfer: Kernel Library for LLM Serving ☆2,815 · Updated this week
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse attention computation… ☆1,013 · Updated last week
- Official Implementation of EAGLE-1 (ICML'24), EAGLE-2 (EMNLP'24), and EAGLE-3. ☆1,220 · Updated this week
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆1,316 · Updated this week
- Redis for LLMs ☆951 · Updated this week
- Minimalistic large language model 3D-parallelism training ☆1,850 · Updated this week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,516 · Updated this week
- Muon is Scalable for LLM Training ☆1,043 · Updated last month
- Serving multiple LoRA-finetuned LLMs as one ☆1,058 · Updated last year
- [MLSys'25] QServe: W4A8KV4 Quantization and System Co-design for Efficient LLM Serving; [MLSys'25] LServe: Efficient Long-sequence LLM Se… ☆663 · Updated 2 months ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups at up to medium batch sizes of 16-32 tokens. ☆818 · Updated 8 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆723 · Updated 7 months ago
- MoBA: Mixture of Block Attention for Long-Context LLMs ☆1,771 · Updated last month
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆915 · Updated 3 weeks ago
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,246 · Updated 2 months ago
- Official Repo for Open-Reasoner-Zero ☆1,912 · Updated last month
- Materials for learning SGLang ☆406 · Updated 2 weeks ago
- Recipes to scale inference-time compute of open models ☆1,068 · Updated this week
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. Documentation: ☆2,145 · Updated last week
- 🐳 Efficient Triton implementations for "Native Sparse Attention: Hardware-Aligned and Natively Trainable Sparse Attention" ☆653 · Updated last month
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data… ☆697 · Updated last month
- VPTQ, a flexible and extreme low-bit quantization algorithm ☆632 · Updated 2 weeks ago
- [ICLR 2025] DuoAttention: Efficient Long-Context LLM Inference with Retrieval and Streaming Heads ☆457 · Updated 3 months ago
- OLMoE: Open Mixture-of-Experts Language Models ☆739 · Updated last month
- LLMPerf is a library for validating and benchmarking LLMs ☆900 · Updated 5 months ago
- Ring attention implementation with flash attention ☆759 · Updated last month
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆1,108 · Updated this week
- Scalable toolkit for efficient model alignment ☆786 · Updated last week
- Production-ready LLM model compression/quantization toolkit with hardware-accelerated inference support for both CPU/GPU via HF, vLLM, and SGLa… ☆537 · Updated this week