intentee / llmops-handbook
A practical, advanced guide to LLMOps, covering the general concepts behind large language models, deployment techniques, and software engineering practices. (work in progress)
☆75 · Updated last year
Alternatives and similar repositories for llmops-handbook
Users interested in llmops-handbook are comparing it to the libraries listed below.
- High-level library for batched embedding generation, blazingly fast web-based RAG, and quantized index processing ⚡ ☆67 · Updated 11 months ago
- Code for evaluating with Flow-Judge-v0.1, an open-source, lightweight (3.8B) language model optimized for LLM system evaluations. Crafte… ☆78 · Updated 11 months ago
- ☆133 · Updated 5 months ago
- Dataset crafting with RAG/Wikipedia ground truth and efficient fine-tuning using MLX and Unsloth. Includes configurable dataset annotation … ☆185 · Updated last year
- A lightweight library for AI observability ☆251 · Updated 7 months ago
- A simple experiment in letting two local LLMs have a conversation about anything! ☆111 · Updated last year
- Serving LLMs in the HF Transformers format via a PyFlask API ☆71 · Updated last year
- Simple examples using Argilla tools to build AI ☆55 · Updated 10 months ago
- Chrome and Firefox extension to chat with web pages using local LLMs ☆126 · Updated 9 months ago
- Rank LLMs, RAG systems, and prompts using automated head-to-head evaluation ☆105 · Updated 9 months ago
- A fast batching API to serve LLMs ☆187 · Updated last year
- Auto Data, a library designed for quick and effortless creation of datasets tailored for fine-tuning large language models (LLMs) ☆102 · Updated 11 months ago
- This small API downloads and exposes access to NeuML's txtai-wikipedia and full Wikipedia datasets, taking in a query and returning full … ☆100 · Updated last month
- Function-calling benchmark and testing ☆90 · Updated last year
- Embed anything. ☆28 · Updated last year
- A Python package for developing AI applications with local LLMs ☆150 · Updated 9 months ago
- ☆104 · Updated 3 months ago
- A stable, fast, and easy-to-use inference library with a focus on a sync-to-async API ☆45 · Updated last year
- Function-calling-based LLM agents ☆287 · Updated last year
- Python package wrapping llama.cpp for on-device LLM inference ☆90 · Updated 2 months ago
- A lightweight, open-source blueprint for building powerful and scalable LLM chat applications ☆28 · Updated last year
- Client-side toolkit for using large language models, including self-hosted ones ☆113 · Updated 10 months ago
- ☆67 · Updated last year
- Synthetic data for LLM fine-tuning ☆120 · Updated last year
- ☆101 · Updated last month
- Generate synthetic data using OpenAI, MistralAI, or AnthropicAI ☆222 · Updated last year
- Train an adapter for any embedding model in under a minute ☆126 · Updated 5 months ago
- Mistral + Haystack: build RAG pipelines that rock 🤘 ☆105 · Updated last year
- ☆207 · Updated last year
- RAG example using DSPy, Gradio, and FastAPI ☆85 · Updated last year