otriscon / llm-structured-output
☆90 · Updated 10 months ago
Alternatives and similar repositories for llm-structured-output
Users interested in llm-structured-output are comparing it to the libraries listed below.
- Implementation of Nougat that focuses on processing PDFs locally. ☆83 · Updated 10 months ago
- Client Code Examples, Use Cases and Benchmarks for Enterprise h2oGPTe RAG-Based GenAI Platform ☆91 · Updated 2 months ago
- Easy to use, high-performance knowledge distillation for LLMs ☆97 · Updated 7 months ago
- GenAI & agent toolkit for Apple Silicon Mac, implementing JSON schema-steered structured output (3SO) and tool-calling in Python. For mor… ☆129 · Updated 3 weeks ago
- Simple examples using Argilla tools to build AI ☆56 · Updated last year
- ☆164 · Updated 3 months ago
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆91 · Updated 10 months ago
- An implementation of Self-Extend to expand the context window via grouped attention ☆119 · Updated last year
- Google DeepMind's PromptBreeder for automated prompt engineering, implemented in LangChain Expression Language. ☆159 · Updated last year
- Synthetic Data for LLM Fine-Tuning ☆119 · Updated 2 years ago
- A DSPy-based implementation of the tree of thoughts method (Yao et al., 2023) for generating persuasive arguments ☆93 · Updated 2 months ago
- 🚀 Scale your RAG pipeline using Ragswift: A scalable centralized embeddings management platform ☆38 · Updated last year
- Scripts to create your own MoE models using MLX ☆90 · Updated last year
- One Line To Build Zero-Data Classifiers in Minutes ☆62 · Updated last year
- Experimental Code for StructuredRAG: JSON Response Formatting with Large Language Models ☆115 · Updated 7 months ago
- Generate Synthetic Data Using OpenAI, MistralAI or AnthropicAI ☆222 · Updated last year
- GPT-4 Level Conversational QA Trained In a Few Hours ☆66 · Updated last year
- For running inference on and serving local LLMs using the MLX framework ☆109 · Updated last year
- ☆40 · Updated 11 months ago
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching using MLX. ☆99 · Updated 5 months ago
- Benchmark various LLM Structured Output frameworks: Instructor, Mirascope, LangChain, LlamaIndex, Fructose, Marvin, Outlines, etc. on task… ☆179 · Updated last year
- Function Calling Benchmark & Testing ☆92 · Updated last year
- A framework for evaluating function calls made by LLMs ☆40 · Updated last year
- ☆117 · Updated 11 months ago
- Fast parallel LLM inference for MLX ☆234 · Updated last year
- Distributed inference for MLX LLMs ☆99 · Updated last year
- Mixing Language Models with Self-Verification and Meta-Verification ☆110 · Updated 11 months ago
- Code for evaluating with Flow-Judge-v0.1 - an open-source, lightweight (3.8B) language model optimized for LLM system evaluations. Crafte… ☆78 · Updated last year
- An easy-to-understand framework for LLM samplers that rewind and revise generated tokens ☆146 · Updated 9 months ago
- LLM reads a paper and produces a working prototype ☆60 · Updated 7 months ago