santiagomed / orca
LLM Orchestrator built in Rust
☆267 · Updated 10 months ago
Alternatives and similar repositories for orca:
Users interested in orca are comparing it to the libraries listed below.
- 🦀 A curated list of Rust tools, libraries, and frameworks for working with LLMs, GPT, AI ☆318 · Updated 10 months ago
- Fast, streaming indexing, query, and agent library for building LLM applications in Rust ☆348 · Updated this week
- Efficient platform for inference and serving local LLMs, including an OpenAI-compatible API server. ☆289 · Updated this week
- Rust client for Qdrant vector search engine ☆245 · Updated this week
- Rust library for generating vector embeddings and reranking locally ☆397 · Updated this week
- Tutorial for Porting PyTorch Transformer Models to Candle (Rust) ☆272 · Updated 5 months ago
- An LLM interface (chatbot) implemented in pure Rust using HuggingFace/Candle over Axum WebSockets, an SQLite database, and a Leptos (Was… ☆125 · Updated 3 months ago
- Rust client for the Hugging Face Hub, aiming for a minimal subset of the features of the `huggingface-hub` Python package (see the download sketch after this list) ☆169 · Updated this week
- Low-rank adaptation (LoRA) for Candle. ☆134 · Updated 4 months ago
- Llama2 LLM ported to Rust Burn ☆278 · Updated 9 months ago
- Inference Llama 2 in one file of pure Rust 🦀 ☆231 · Updated last year
- The Easiest Rust Interface for Local LLMs and an Interface for Deterministic Signals from Probabilistic LLM Vibes ☆163 · Updated this week
- pgvector support for Rust (see the Postgres query sketch after this list) ☆140 · Updated 2 months ago
- In-memory vector store with efficient read and write performance for semantic caching and retrieval systems; Redis for semantic caching. ☆358 · Updated last month
- Rust multi-provider generative AI client (Ollama, OpenAI, Anthropic, Groq, Gemini, Cohere, ...) ☆269 · Updated this week
- A Rust implementation of OpenAI's Whisper model using the Burn framework ☆284 · Updated 8 months ago
- Library for doing retrieval-augmented generation (RAG) ☆51 · Updated last month
- 🦜️🔗 LangChain for Rust, the easiest way to write LLM-based programs in Rust ☆716 · Updated this week
- Models and examples built with Burn ☆203 · Updated last week
- Extract core logic from Qdrant and make it available as a library. ☆57 · Updated 9 months ago
- 🦀 Rust + Large Language Models - make AI services freely and easily. ☆181 · Updated 10 months ago
- Tera is an AI assistant tailored just for you that runs fully locally. ☆72 · Updated 10 months ago
- llama.cpp Rust bindings ☆354 · Updated 6 months ago
- A simple Rust library for the OpenAI API, free from complex async operations and redundant dependencies. ☆118 · Updated 6 months ago
- Ready-made tokenizer library for working with GPT and tiktoken (see the token-counting sketch after this list) ☆277 · Updated last week
- Neural search for websites, docs, articles - online! ☆130 · Updated 2 months ago
- Use multiple LLM backends from a single crate, with simple builder-based configuration and built-in prompt chaining & templating. ☆92 · Updated this week
- OpenAI API client library for Rust (unofficial); see the chat-completion sketch after this list ☆351 · Updated this week
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust ☆80 · Updated last year
- A comprehensive Rust translation of the code from Sebastian Raschka's "Build an LLM from Scratch" book. ☆58 · Updated this week
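
A few of the entries above lend themselves to quick illustrations. First, the Hugging Face Hub client: the sketch below shows downloading a single file with the `hf-hub` crate's blocking (`sync`) API. It is a minimal sketch, not canonical usage; the model id `bert-base-uncased` is only an example.

```rust
// Minimal sketch assuming the `hf-hub` crate's blocking API.
use hf_hub::api::sync::Api;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let api = Api::new()?; // uses the default cache directory (and HF_TOKEN if set)
    let repo = api.model("bert-base-uncased".to_string()); // any model repo on the Hub
    let config_path = repo.get("config.json")?; // downloads or reuses the cached copy
    println!("config downloaded to {}", config_path.display());
    Ok(())
}
```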
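For the pgvector entry, here is a sketch of storing and querying embeddings from Postgres. It assumes the crate's `sqlx` feature, a database with the pgvector extension installed, and a hypothetical `items(id, embedding)` table; none of those names come from the listing above.

```rust
// Minimal sketch assuming the `pgvector` crate with its `sqlx` feature enabled.
// The connection string and the `items` table are assumptions for illustration.
use pgvector::Vector;
use sqlx::postgres::PgPoolOptions;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let pool = PgPoolOptions::new()
        .connect("postgres://localhost/vectors")
        .await?;

    // Insert one embedding.
    let embedding = Vector::from(vec![0.1_f32, 0.2, 0.3]);
    sqlx::query("INSERT INTO items (embedding) VALUES ($1)")
        .bind(embedding)
        .execute(&pool)
        .await?;

    // Nearest-neighbour lookup by L2 distance (`<->` is the pgvector operator).
    let (id,): (i32,) = sqlx::query_as("SELECT id FROM items ORDER BY embedding <-> $1 LIMIT 1")
        .bind(Vector::from(vec![0.1_f32, 0.2, 0.3]))
        .fetch_one(&pool)
        .await?;
    println!("closest item: {id}");
    Ok(())
}
```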
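For the GPT/tiktoken tokenizer entry, a token-counting sketch using the `cl100k_base` encoding (the one used by GPT-3.5/GPT-4-era models), assuming the `tiktoken-rs` crate.

```rust
// Minimal sketch assuming the `tiktoken-rs` crate.
use tiktoken_rs::cl100k_base;

fn main() {
    let bpe = cl100k_base().expect("failed to load the cl100k_base encoder");
    // Count how many tokens a prompt will consume before sending it to a model.
    let tokens = bpe.encode_with_special_tokens("Counting tokens before sending a prompt");
    println!("prompt is {} tokens long", tokens.len());
}
```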
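Finally, for the unofficial OpenAI client, a chat-completion sketch based on recent `async-openai` releases; type names have moved between versions, so treat this as an assumption rather than the definitive API. It expects `OPENAI_API_KEY` in the environment, and the model name is only an example.

```rust
// Minimal sketch assuming a recent `async-openai` release.
use async_openai::{
    types::{ChatCompletionRequestUserMessageArgs, CreateChatCompletionRequestArgs},
    Client,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let client = Client::new(); // picks up OPENAI_API_KEY from the environment

    let request = CreateChatCompletionRequestArgs::default()
        .model("gpt-4o-mini") // example model name
        .messages([ChatCompletionRequestUserMessageArgs::default()
            .content("Say hello from Rust")
            .build()?
            .into()])
        .build()?;

    let response = client.chat().create(request).await?;
    if let Some(content) = &response.choices[0].message.content {
        println!("{content}");
    }
    Ok(())
}
```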