Enforce the output format (JSON Schema, Regex, etc.) of a language model
☆1,992 · Updated Aug 24, 2025
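The core idea behind lm-format-enforcer and the constrained-generation libraries listed below can be sketched in a few lines: at every decoding step, mask the vocabulary so that only tokens keeping the partial output valid under the target format remain selectable. The toy DFA, vocabulary, and ranking below are illustrative stand-ins, not lm-format-enforcer's actual API.

```python
def advance(state, ch):
    # Hand-rolled DFA for a JSON integer: optional minus sign, then digits.
    if state == "start":
        if ch == "-":
            return "sign"
        return "digits" if ch.isdigit() else None
    if state in ("sign", "digits"):
        return "digits" if ch.isdigit() else None
    return None

def feed(state, token):
    # Run every character of a token through the DFA; None means rejected.
    for ch in token:
        state = advance(state, ch)
        if state is None:
            return None
    return state

def constrained_decode(rank, vocab, steps):
    # Greedy decoding with a format mask: at each step, pick the token the
    # "model" prefers most (lowest rank) among those the DFA still allows.
    state, out = "start", []
    for _ in range(steps):
        for tok in sorted(vocab, key=rank.get):
            nxt = feed(state, tok)
            if nxt is not None:
                state = nxt
                out.append(tok)
                break
    return "".join(out)

vocab = ["hello", "world", "-", "42"]
rank = {"hello": 0, "world": 1, "-": 2, "42": 3}
print(constrained_decode(rank, vocab, steps=2))  # the mask forces "-42"
```

Even though the model "prefers" `hello`, the mask rejects it and the highest-ranked format-legal tokens are emitted instead; real libraries do the same at the logits level, with token-aware parsers for JSON Schema or regular expressions instead of this toy DFA.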
Alternatives and similar repositories for lm-format-enforcer
Users interested in lm-format-enforcer are comparing it to the libraries listed below.
- Structured Outputs (☆13,488, updated Mar 2, 2026)
- A guidance language for controlling large language models. (☆21,333, updated Feb 13, 2026)
- A Bulletproof Way to Generate Structured JSON from Language Models (☆4,905, updated Feb 24, 2024)
- Structured outputs for LLMs (☆12,468, updated Feb 25, 2026)
- SGLang is a high-performance serving framework for large language models and multimodal models. (☆24,216, updated this week)
- A language for constraint-guided and efficient LLM programming. (☆4,155, updated May 22, 2025)
- Chat language model that can use tools and interpret the results (☆1,592, updated Dec 3, 2025)
- A fast inference library for running LLMs locally on modern consumer-class GPUs (☆4,451, updated this week)
- DSPy: The framework for programming—not prompting—language models (☆32,519, updated this week)
- Tools for merging pretrained large language models. (☆6,842, updated Feb 28, 2026)
- Fast, Flexible and Portable Structured Generation (☆1,567, updated this week)
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. (☆7,645, updated this week)
- Go ahead and axolotl questions (☆11,395, updated this week)
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs (☆3,732, updated May 21, 2025)
- Large Language Model Text Generation Inference (☆10,795, updated Jan 8, 2026)
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… (☆3,114, updated Mar 2, 2026)
- A high-throughput and memory-efficient inference and serving engine for LLMs (☆71,883, updated this week)
- Infinity is a high-throughput, low-latency serving engine for text-embeddings, reranking models, clip, clap and colpali (☆2,703, updated Feb 5, 2026)
- Easily use and train state of the art late-interaction retrieval methods (ColBERT) in any RAG pipeline. Designed for modularity and ease-… (☆3,868, updated May 17, 2025)
- Formatron empowers everyone to control the format of language models' output with minimal overhead. (☆234, updated Jun 7, 2025)
- [EMNLP'23, ACL'24] To speed up LLM inference and enhance LLMs' perception of key information, compress the prompt and KV-Cache, which ach… (☆5,892, updated Oct 28, 2025)
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. (☆5,028, updated Apr 11, 2025)
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. (☆2,913, updated Sep 30, 2023)
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads (☆2,714, updated Jun 25, 2024)
- Train transformer language models with reinforcement learning. (☆17,523, updated this week)
- A blazing fast inference solution for text embeddings models (☆4,553, updated Feb 25, 2026)
- A framework for few-shot evaluation of language models. (☆11,618, updated this week)
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. (☆2,319, updated May 11, 2025)
- Robust recipes to align language models with human and AI preferences (☆5,510, updated Sep 8, 2025)
- Adding guardrails to large language models. (☆6,492, updated this week)
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. (☆2,915, updated this week)
- Supercharge Your LLM Application Evaluations 🚀 (☆12,826, updated Feb 24, 2026)
- Optimizing inference proxy for LLMs (☆3,352, updated Jan 28, 2026)
- Accessible large language models via k-bit quantization for PyTorch. (☆8,019, updated this week)
- Fine-tuning & Reinforcement Learning for LLMs. 🦥 Train OpenAI gpt-oss, DeepSeek, Qwen, Llama, Gemma, TTS 2x faster with 70% less VRAM. (☆53,029, updated this week)
- Python bindings for llama.cpp (☆10,020, updated Aug 15, 2025)
- S-LoRA: Serving Thousands of Concurrent LoRA Adapters (☆1,900, updated Jan 21, 2024)
- Serving multiple LoRA finetuned LLM as one (☆1,144, updated May 8, 2024)
- Large-scale LLM inference engine (☆1,666, updated this week)