dottxt-ai / outlines-core
Faster structured generation
☆224 · Updated last month
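For orientation: outlines-core is the Rust engine behind the outlines Python library. It compiles a regex or JSON schema into a finite-state index and masks invalid tokens at each decoding step, which is what makes its structured generation fast. A minimal sketch, assuming the outlines 0.x Python API (`outlines.models.transformers`, `outlines.generate.json`; names may differ in newer releases) and an example model checkpoint:

```python
# Sketch of schema-constrained generation with the outlines Python library,
# whose schema-to-FSM compilation and token masking are done by outlines-core.
# The model checkpoint below is only an example; any HF causal LM works.
from pydantic import BaseModel

import outlines


class Character(BaseModel):
    name: str
    strength: int


# Load a transformers-backed model.
model = outlines.models.transformers("microsoft/Phi-3-mini-4k-instruct")

# Compile the schema into a finite-state index and constrain sampling to it:
# tokens that would break the schema are masked out at each step.
generator = outlines.generate.json(model, Character)

result = generator("Create a fantasy character:")
print(result)  # a Character instance; output is guaranteed to match the schema
```

Because invalid tokens are masked up front rather than generated and retried, the constraint adds little overhead on top of ordinary sampling.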
Alternatives and similar repositories for outlines-core
Users interested in outlines-core are comparing it to the libraries listed below.
- Super-fast Structured Outputs ☆305 · Updated this week
- A high-performance constrained decoding engine based on context-free grammars, in Rust ☆53 · Updated last month
- The Batched API provides a flexible and efficient way to process multiple requests in a batch, with a primary focus on dynamic batching o… ☆137 · Updated last month
- ☆130 · Updated last year
- ☆152 · Updated 6 months ago
- Code for fine-tuning LLMs with GRPO, specifically for Rust programming, using cargo as feedback ☆95 · Updated 3 months ago
- High-performance MinHash implementation in Rust with Python bindings for efficient similarity estimation and deduplication of large datas… ☆177 · Updated this week
- ☆29 · Updated last year
- Efficient platform for inference and serving of local LLMs, including an OpenAI-compatible API server ☆380 · Updated this week
- High-Performance Engine for Multi-Vector Search ☆106 · Updated 2 weeks ago
- Inference engine for GLiNER models, in Rust ☆59 · Updated 2 months ago
- A Lightweight Library for AI Observability ☆245 · Updated 4 months ago
- TensorRT-LLM server with Structured Outputs (JSON) built with Rust ☆55 · Updated last month
- Train your own SOTA deductive reasoning model ☆94 · Updated 3 months ago
- ☆124 · Updated 2 months ago
- Baguetter is a flexible, efficient, and hackable search engine library implemented in Python. It's designed for quickly benchmarking, imp… ☆181 · Updated 9 months ago
- Simple UI for debugging correlations of text embeddings ☆276 · Updated 3 weeks ago
- Experiments with inference on llama ☆104 · Updated last year
- Scale your LLM-as-a-judge ☆240 · Updated 2 weeks ago
- Late Interaction Models Training & Retrieval ☆444 · Updated 2 weeks ago
- ☆182 · Updated 2 months ago
- Fast parallel LLM inference for MLX ☆193 · Updated 11 months ago
- ☆211 · Updated 11 months ago
- ☆72 · Updated 7 months ago
- Formatron empowers everyone to control the format of language models' output with minimal overhead ☆206 · Updated 2 weeks ago
- Manage scalable open LLM inference endpoints in Slurm clusters ☆260 · Updated 11 months ago
- ☆195 · Updated last year
- ☆199 · Updated last year
- ☆137 · Updated last year
- Synthetic Data for LLM Fine-Tuning ☆120 · Updated last year