AIAnytime / LLM-Inference-API-in-Rust
An LLM inference API written in Rust. It also includes a Streamlit app that sends requests to the running Rust API.
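As a rough illustration of the pattern this repo describes (a Python/Streamlit front end talking to a locally running Rust inference server), here is a minimal client sketch. The endpoint URL, route, and JSON field names (`prompt`, `generated`) are assumptions for illustration only; the repository's actual routes and response shape may differ.

```python
# Hypothetical Streamlit client for a local LLM inference API.
# API_URL, the "/generate" route, and the "prompt"/"generated" JSON
# fields are assumed, not taken from the repository.
import json
import urllib.request

API_URL = "http://127.0.0.1:8000/generate"  # assumed local endpoint

def build_request(prompt: str) -> urllib.request.Request:
    """Package a prompt as a JSON POST request for the inference API."""
    body = json.dumps({"prompt": prompt}).encode("utf-8")
    return urllib.request.Request(
        API_URL, data=body, headers={"Content-Type": "application/json"}
    )

def generate(prompt: str) -> str:
    """Send the prompt and return the generated text (assumed response shape)."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read())["generated"]

if __name__ == "__main__":
    # Streamlit UI; requires `pip install streamlit` and a running server.
    try:
        import streamlit as st
    except ImportError:
        st = None
    if st is not None:
        st.title("LLM inference demo client")
        prompt = st.text_area("Prompt")
        if st.button("Generate") and prompt:
            st.write(generate(prompt))
```

The request-building logic is kept separate from the network call so it can be reused or tested without a server running.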
☆20Updated 2 years ago
Alternatives and similar repositories for LLM-Inference-API-in-Rust
Users interested in LLM-Inference-API-in-Rust are comparing it to the libraries listed below.
- ⚡️Lightning fast in-memory VectorDB written in rust🦀☆30Updated 10 months ago
- Super-simple, fully Rust powered "memory" (doc store + semantic search) for LLM projects, semantic search, etc.☆65Updated 2 years ago
- The MCP enterprise actors-based server or mcp-ectors for short☆31Updated 8 months ago
- On-device LLM inference using the MediaPipe LLM Inference API.☆23Updated last year
- OpenAI compatible API for serving LLAMA-2 model☆218Updated 2 years ago
- A simple, CUDA or CPU powered, library for creating vector embeddings using Candle and models from Hugging Face☆47Updated last year
- Built for demanding AI workflows, this gateway offers low-latency, provider-agnostic access, ensuring your AI applications run smoothly a…☆89Updated 8 months ago
- Model Context Protocol (MCP) CLI server template for Rust☆81Updated 9 months ago
- Rust implementation of Surya☆65Updated 11 months ago
- AI Assistant☆20Updated 9 months ago
- Multi-platform desktop app to download and run Large Language Models (LLMs) locally on your computer.☆290Updated 2 years ago
- 🦀 A Pure Rust Framework For Building AGI (WIP).☆111Updated 3 weeks ago
- Light WebUI for lm.rs☆24Updated last year
- An LLM interface (chat bot) implemented in pure Rust using HuggingFace/Candle over Axum Websockets, an SQLite Database, and a Leptos (Was…☆139Updated last year
- Simple Chainlit UI for running llms locally using Ollama and LangChain☆118Updated last year
- Data extraction with LLM on CPU☆68Updated 2 years ago
- 🤖📝 A markdown editor powered by AI (Ollama)☆64Updated last year
- Web application that can generate code, fix bugs, and run it using various LLMs (GPT, Gemini, PaLM)☆127Updated last year
- auto-rust is an experimental project that automatically generates Rust code with LLMs (Large Language Models) during compilation, utilizing…☆46Updated last year
- Turn natural language into commands. Your CLI tasks, now as easy as a conversation. Run it 100% offline, or use OpenAI's models.☆63Updated last year
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust☆79Updated 2 years ago
- ⚡ Edgen: Local, private GenAI server alternative to OpenAI. No GPU required. Run AI models locally: LLMs (Llama2, Mistral, Mixtral...), …☆369Updated last year
- Your Python AI Coder!☆36Updated 8 months ago
- Notebooks using the Neural Magic libraries 📓☆39Updated last year
- ☆140Updated last year
- Prototype app enabling job description search using natural language description of a job seeker.☆70Updated last year
- Ask shortgpt for instant and concise answers☆13Updated 2 years ago
- Function Calling Mistral 7B. Learn how to make functions call for open source LLMs.☆48Updated last year
- Generative AI web UI and server☆22Updated 2 years ago
- ☆41Updated 2 years ago