The Easiest Rust Interface for Local LLMs and an Interface for Deterministic Signals from Probabilistic LLM Vibes
☆248 · Aug 6, 2025 · Updated 8 months ago
Alternatives and similar repositories for llm_client
Users who are interested in llm_client are comparing it to the libraries listed below.
- llm_utils: Basic LLM tools, best practices, and minimal abstraction. ☆48 · Feb 18, 2025 · Updated last year
- Rust library for vector embeddings and reranking. ☆848 · Updated this week
- A simple, CUDA- or CPU-powered library for creating vector embeddings using Candle and models from Hugging Face ☆48 · May 3, 2024 · Updated last year
- A high-performance constrained decoding engine based on context-free grammar in Rust ☆58 · May 22, 2025 · Updated 10 months ago
- ☆535 · Updated this week
- AI Assistant ☆20 · Feb 21, 2026 · Updated last month
- Run generative AI models directly on your hardware ☆42 · Aug 7, 2024 · Updated last year
- Efficient platform for inference and serving local LLMs, including an OpenAI-compatible API server. ☆636 · Apr 3, 2026 · Updated last week
- High-level, optionally asynchronous Rust bindings to llama.cpp ☆245 · Jun 5, 2024 · Updated last year
- This project is a web-based LLM (Large Language Model) chat tool developed using Rust, the Dioxus framework, and the Candle framework. It… ☆102 · Aug 3, 2024 · Updated last year
- Fast, flexible LLM inference ☆6,928 · Apr 9, 2026 · Updated last week
- LLM orchestrator built in Rust ☆285 · Mar 14, 2024 · Updated 2 years ago
- Instant, controllable, local pre-trained AI models in Rust ☆2,179 · Mar 28, 2026 · Updated 2 weeks ago
- LLaMA2 + Rust ☆13 · Aug 8, 2023 · Updated 2 years ago
- Minimal LLM inference in Rust ☆1,036 · Oct 24, 2024 · Updated last year
- The hearth of The Pulsar App: fast, secure, and shared inference with a modern UI ☆59 · Dec 1, 2024 · Updated last year
- Common stop words in a variety of languages ☆26 · Feb 21, 2026 · Updated last month
- Yet another `llama.cpp` Rust wrapper ☆12 · Jun 19, 2024 · Updated last year
- Fast, streaming indexing, query, and agentic LLM applications in Rust ☆687 · Apr 8, 2026 · Updated last week
- ☆24 · Jan 22, 2025 · Updated last year
- A simple and easy-to-use library for interacting with the Ollama API. ☆1,013 · Apr 6, 2026 · Updated last week
- Split text into semantic chunks, up to a desired chunk size. Supports calculating length by characters and tokens, and is callable from R… ☆584 · Updated this week
- Keyword extraction algorithms in Rust ☆31 · Oct 12, 2024 · Updated last year
- llama.cpp Rust bindings ☆419 · Jun 27, 2024 · Updated last year
- Rust bindings to https://github.com/k2-fsa/sherpa-onnx ☆307 · Mar 8, 2026 · Updated last month
- Modern, fast document parser written in 🦀 ☆589 · Feb 18, 2026 · Updated last month
- Kheish: A multi-role LLM agent for tasks like code auditing, file searching, and more, seamlessly leveraging RAG and extensible modules. ☆143 · Dec 28, 2024 · Updated last year
- Rust implementation of Ultralytics YOLOv8/v10 using ONNX (ort) ☆39 · Mar 5, 2026 · Updated last month
- User-space TCP/IP stack ☆39 · Jan 23, 2026 · Updated 2 months ago
- A centralized platform for managing and connecting MCP servers. MCP Center provides a high-performance proxy service that enables seamles… ☆40 · Sep 10, 2025 · Updated 7 months ago
- Graph model execution API for Candle ☆17 · Jul 27, 2025 · Updated 8 months ago
- 🦜️🔗 LangChain for Rust, the easiest way to write LLM-based programs in Rust ☆1,280 · Apr 8, 2026 · Updated last week