mlc-ai / xgrammar
Fast, Flexible and Portable Structured Generation
☆1,396 · Updated last week
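Xgrammar's tagline, "Fast, Flexible and Portable Structured Generation", refers to constrained decoding: at every sampling step, tokens that would take the output outside a target grammar (a JSON schema, regex, or CFG) are masked out of the logits. As context for the comparisons below, here is a minimal illustrative sketch of that loop; it deliberately does not use xgrammar's real API, and the toy vocabulary, enumerated "grammar", and random logits are all stand-ins.

```python
# Minimal sketch of grammar-constrained decoding: at each step, mask out
# every token that would take the output outside the target language.
# Everything here (vocabulary, "grammar", logits) is a toy stand-in,
# NOT xgrammar's actual API.
import math
import random

VOCAB = ["{", "}", '"ok"', '"err"', ":", "true", "false", "<eos>"]

# Toy "grammar": the set of complete valid outputs. A real engine compiles
# a context-free grammar into an automaton instead of enumerating strings.
VALID = {'{"ok":true}', '{"ok":false}', '{"err":true}'}

def allowed(prefix: str, tok: str) -> bool:
    """A token is allowed if some valid string still extends prefix+tok."""
    if tok == "<eos>":
        return prefix in VALID
    return any(v.startswith(prefix + tok) for v in VALID)

def constrained_decode() -> str:
    out = ""
    while True:
        logits = [random.gauss(0, 1) for _ in VOCAB]  # stand-in for model logits
        # Apply the grammar mask: illegal tokens get probability zero.
        for i, tok in enumerate(VOCAB):
            if not allowed(out, tok):
                logits[i] = -math.inf
        probs = [math.exp(l) for l in logits]
        tok = random.choices(VOCAB, weights=probs)[0]
        if tok == "<eos>":
            return out
        out += tok

print(constrained_decode())  # always a member of VALID
```

A production engine like xgrammar compiles the grammar ahead of time and computes each step's token mask over the model's full vocabulary; making that mask cheap per step is the part these libraries compete on.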
Alternatives and similar repositories for xgrammar
Users that are interested in xgrammar are comparing it to the libraries listed below
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLM inference, computes attention with approximate, dynamic sparsity… ☆1,163 · Updated 2 months ago
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆2,296 · Updated this week
- A throughput-oriented high-performance serving framework for LLMs ☆918 · Updated last month
- slime is an LLM post-training framework for RL Scaling. ☆2,612 · Updated last week
- FlashInfer: Kernel Library for LLM Serving ☆4,168 · Updated this week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,162 · Updated this week
- LLMPerf is a library for validating and benchmarking LLMs ☆1,057 · Updated 11 months ago
- LLM quantization (compression) toolkit with hardware acceleration support for Nvidia CUDA, AMD ROCm, Intel XPU, and Intel/AMD/Apple CPU vi… ☆913 · Updated this week
- Muon is Scalable for LLM Training ☆1,372 · Updated 4 months ago
- Minimalistic large language model 3D-parallelism training ☆2,351 · Updated 2 weeks ago
- Serving multiple LoRA fine-tuned LLMs as one ☆1,121 · Updated last year
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,307 · Updated 8 months ago
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆958 · Updated last year
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ☆2,284 · Updated 6 months ago
- OLMoE: Open Mixture-of-Experts Language Models ☆919 · Updated 2 months ago
- Materials for learning SGLang ☆658 · Updated 2 weeks ago
- SkyRL: A Modular Full-stack RL Library for LLMs ☆1,287 · Updated last week
- Scalable toolkit for efficient model alignment ☆847 · Updated last month
- Scalable toolkit for efficient model reinforcement ☆1,054 · Updated this week
- MoBA: Mixture of Block Attention for Long-Context LLMs ☆2,007 · Updated 8 months ago
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel ☆1,973 · Updated last week
- Recipes to scale inference-time compute of open models ☆1,118 · Updated 6 months ago
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters (see the multi-adapter sketch after this list) ☆1,874 · Updated last year
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,080 · Updated 5 months ago
- Ring attention implementation with flash attention ☆923 · Updated 2 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆750 · Updated last year
- Evaluate and Enhance Your LLM Deployments for Real-World Inference Needs ☆730 · Updated this week
- Train speculative decoding models effortlessly and port them smoothly to SGLang serving (see the draft-and-verify sketch after this list). ☆523 · Updated this week
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆793 · Updated 8 months ago
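Several entries above (Lookahead Decoding, the SGLang speculative-decoding trainer) revolve around draft-and-verify inference. The sketch below shows the core accept/reject loop of greedy speculative decoding with toy stand-in models; it is illustrative only and not the API of any listed project.

```python
# Minimal sketch of greedy speculative decoding: a cheap draft model proposes
# k tokens, the target model checks the whole extension, and we keep the
# longest agreeing prefix plus one corrected/bonus token. For greedy decoding
# the output is provably identical to decoding with the target model alone.
from typing import Callable, List

Model = Callable[[List[int]], int]  # maps a context to its greedy next token

def speculative_step(target: Model, draft: Model, ctx: List[int], k: int) -> List[int]:
    # 1) Draft k tokens autoregressively with the cheap model.
    proposal: List[int] = []
    for _ in range(k):
        proposal.append(draft(ctx + proposal))
    # 2) Verify. A real engine scores all k positions in ONE target forward
    #    pass; for clarity we query position by position here.
    accepted: List[int] = []
    for i, tok in enumerate(proposal):
        expect = target(ctx + proposal[:i])
        if expect == tok:
            accepted.append(tok)        # draft agreed with the target: keep it
        else:
            accepted.append(expect)     # first mismatch: take the target's token
            break
    else:
        accepted.append(target(ctx + proposal))  # all k matched: bonus token
    return accepted

# Toy stand-in models: the target counts up by 1; the draft diverges whenever
# the last token is a multiple of 3.
target = lambda ctx: ctx[-1] + 1
draft = lambda ctx: ctx[-1] + (1 if ctx[-1] % 3 else 2)

ctx = [0]
while len(ctx) < 20:
    ctx += speculative_step(target, draft, ctx, k=4)
print(ctx)  # 0, 1, 2, ... exactly what pure target-model decoding yields
```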
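Likewise, the multi-LoRA serving entries (S-LoRA, "Serving multiple LoRA fine-tuned LLMs as one") build on the fact that a LoRA adapter is just a low-rank delta added to a frozen base weight, y = xW + s·(xA)B, so one batch can mix adapters over a single shared GEMM. A minimal NumPy sketch with made-up shapes and adapters:

```python
# Minimal sketch of serving many LoRA adapters over one shared base weight:
# each request in the batch selects its own low-rank pair (A_i, B_i), and
# y_i = x_i @ W + s * (x_i @ A_i) @ B_i. All shapes/weights are made up.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, rank, n_adapters = 8, 6, 2, 3

W = rng.normal(size=(d_in, d_out))             # frozen base weight, shared
A = rng.normal(size=(n_adapters, d_in, rank))  # per-adapter down-projection
B = rng.normal(size=(n_adapters, rank, d_out)) # per-adapter up-projection
scale = 0.5

def multi_lora_forward(x: np.ndarray, adapter_ids: np.ndarray) -> np.ndarray:
    """x: (batch, d_in); adapter_ids: (batch,) index of each request's LoRA."""
    base = x @ W                                        # one shared GEMM
    # Gather each request's adapter and apply its low-rank update.
    down = np.einsum("bi,bir->br", x, A[adapter_ids])   # (batch, rank)
    delta = np.einsum("br,bro->bo", down, B[adapter_ids])
    return base + scale * delta

x = rng.normal(size=(4, d_in))
ids = np.array([0, 2, 1, 0])                # a batch mixing three adapters
print(multi_lora_forward(x, ids).shape)     # (4, 6)
```

Systems like S-LoRA add paged adapter memory and custom gathered-GEMM kernels on top of this idea so thousands of adapters can share one base model.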