santiagomed / orca
LLM Orchestrator built in Rust
⭐267 · Updated 8 months ago
Related projects
Alternatives and complementary repositories for orca
- Efficient platform for inference and serving local LLMs, including an OpenAI-compatible API server. ⭐265 · Updated last month
- 🦀 A curated list of Rust tools, libraries, and frameworks for working with LLMs, GPT, and AI ⭐286 · Updated 8 months ago
- Rust client for the Qdrant vector search engine ⭐232 · Updated last month
- Fast, streaming indexing and query library for AI (RAG) applications, written in Rust ⭐257 · Updated this week
- An LLM interface (chat bot) implemented in pure Rust using HuggingFace/Candle over Axum WebSockets, an SQLite database, and a Leptos (Was… ⭐122 · Updated last month
- Library for generating vector embeddings and reranking in Rust ⭐285 · Updated this week
- Tutorial for porting PyTorch transformer models to Candle (Rust) ⭐252 · Updated 3 months ago
- Rust client for the Hugging Face Hub, aiming for a minimal subset of the features of the `huggingface-hub` Python package ⭐153 · Updated 2 months ago
- Inference Llama 2 in one file of pure Rust 🦀 ⭐229 · Updated last year
- Llama 2 LLM ported to Rust's Burn framework ⭐274 · Updated 7 months ago
- Low-rank adaptation (LoRA) for Candle. ⭐127 · Updated 3 months ago
- The easiest Rust interface for local LLMs, and an interface for deterministic signals from probabilistic LLM vibes ⭐133 · Updated 3 weeks ago
- High-level, optionally asynchronous Rust bindings to llama.cpp ⭐179 · Updated 5 months ago
- pgvector support for Rust ⭐121 · Updated last week
- Stable Diffusion v1.4 ported to Rust's Burn framework ⭐316 · Updated last month
- Models and examples built with Burn ⭐185 · Updated this week
- llama.cpp Rust bindings ⭐339 · Updated 4 months ago
- Rust multi-provider generative AI client (Ollama, OpenAI, Anthropic, Groq, Gemini, Cohere, …) ⭐207 · Updated this week
- 🦜️🔗 LangChain for Rust, the easiest way to write LLM-based programs in Rust ⭐621 · Updated this week
- Hybrid vector database with a flexible SQL storage engine and multi-index support. ⭐359 · Updated this week
- ⭐162 · Updated this week
- Extract core logic from qdrant and make it available as a library. ⭐56 · Updated 7 months ago
- 🦀 Rust + Large Language Models: build AI services freely and easily ⭐181 · Updated 8 months ago
- Library for retrieval-augmented generation (RAG) ⭐42 · Updated this week
- A Rust implementation of OpenAI's Whisper model using the Burn framework ⭐270 · Updated 6 months ago
- OpenAI-compatible API for serving the LLaMA-2 model ⭐215 · Updated last year
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust ⭐80 · Updated 10 months ago
- Tera is an AI assistant tailored just for you that runs fully locally. ⭐60 · Updated 8 months ago
- Rust + OpenCL + AVX2 implementation of LLaMA inference code ⭐537 · Updated 9 months ago
- Cookbook to build Rust Candle models ⭐74 · Updated 11 months ago
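Several of the servers listed above advertise an OpenAI-compatible API, which means any client that speaks the OpenAI chat-completions request format can target them. As a minimal sketch (the server URL, port, and model name here are hypothetical, and only the standard `model`/`messages` body fields are shown), a request body can be built in plain Rust with no external crates:

```rust
// Sketch of an OpenAI-compatible chat-completions request body.
// The endpoint URL and model name are hypothetical placeholders;
// the JSON shape follows the common OpenAI chat-completions format.
fn chat_request_body(model: &str, user_msg: &str) -> String {
    format!(
        r#"{{"model": "{model}", "messages": [{{"role": "user", "content": "{user_msg}"}}], "temperature": 0.7}}"#
    )
}

fn main() {
    // A real client would POST this body to the server with
    // the header Content-Type: application/json.
    let url = "http://localhost:8080/v1/chat/completions"; // hypothetical local server
    let body = chat_request_body("llama-2-7b-chat", "Summarize RAG in one sentence.");
    println!("POST {url}");
    println!("{body}");
}
```

The same body works against any of the OpenAI-compatible servers above by changing only the base URL and model name; an HTTP client crate (or `curl`) would carry out the actual POST.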