AIAnytime / LLM-Inference-API-in-Rust
LLM Inference API in Rust. It also includes a Streamlit app that calls the running Rust API.
☆20 · Updated 2 years ago
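Since the project pairs a Rust HTTP inference API with a Streamlit client that calls it, the sketch below shows roughly what such an endpoint could look like. This is a minimal illustration assuming axum, tokio, and serde as dependencies; the `/infer` route, the request/response field names, and the `generate` stub are hypothetical and not taken from the repository.

```rust
// Hypothetical sketch of an LLM inference HTTP endpoint in Rust.
// The /infer route, request/response shapes, and the `generate` stub are
// illustrative assumptions, not the repository's actual API.
use axum::{routing::post, Json, Router};
use serde::{Deserialize, Serialize};

#[derive(Deserialize)]
struct InferenceRequest {
    prompt: String,
}

#[derive(Serialize)]
struct InferenceResponse {
    completion: String,
}

// Placeholder for the real model call (e.g. a Candle-backed generator).
fn generate(prompt: &str) -> String {
    format!("echo: {prompt}")
}

// Handler: deserialize the JSON request, run generation, return JSON.
async fn infer(Json(req): Json<InferenceRequest>) -> Json<InferenceResponse> {
    Json(InferenceResponse {
        completion: generate(&req.prompt),
    })
}

#[tokio::main]
async fn main() {
    let app = Router::new().route("/infer", post(infer));
    let listener = tokio::net::TcpListener::bind("0.0.0.0:8080").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```

A Streamlit front end would then POST a prompt to this endpoint (for example via `requests.post("http://localhost:8080/infer", json={"prompt": "..."})`) and display the returned completion.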
Alternatives and similar repositories for LLM-Inference-API-in-Rust
Users interested in LLM-Inference-API-in-Rust are comparing it to the libraries listed below.
- ⚡️Lightning fast in-memory VectorDB written in rust🦀 ☆28 · Updated 9 months ago
- On-device LLM Inference using Mediapipe LLM Inference API. ☆23 · Updated last year
- AI Assistant ☆20 · Updated 8 months ago
- Built for demanding AI workflows, this gateway offers low-latency, provider-agnostic access, ensuring your AI applications run smoothly a… ☆84 · Updated 6 months ago
- Ingest any document type and query ☆13 · Updated 2 years ago
- A high-performance BPE tokenizer built in Rust with Python bindings, focused on speed, safety, and resource optimization. ☆55 · Updated last week
- Model Context Protocol (MCP) CLI server template for Rust ☆81 · Updated 8 months ago
- ☆45 · Updated last year
- VSCode Copilot for Groq fans! ☆42 · Updated 5 months ago
- A simple, CUDA- or CPU-powered library for creating vector embeddings using Candle and models from Hugging Face ☆46 · Updated last year
- Simple Chainlit UI for running LLMs locally using Ollama and LangChain ☆119 · Updated last year
- Rust OCR Microservice ☆14 · Updated 2 years ago
- 🦀 A Pure Rust Framework For Building AGI (WIP). ☆110 · Updated last month
- Web application that can generate code, fix bugs, and run it using various LLMs (GPT, Gemini, PaLM) ☆126 · Updated last year
- Super-simple, fully Rust powered "memory" (doc store + semantic search) for LLM projects, semantic search, etc. ☆64 · Updated 2 years ago
- Aitino is a platform that allows for the creation of crews of AI Agents to help you automate tasks and solve complex problems. ☆89 · Updated 9 months ago
- 🤖📝 A markdown editor powered by AI (Ollama) ☆64 · Updated last year
- auto-rust is an experimental project that automatically generates Rust code with LLMs (Large Language Models) during compilation, utilizing… ☆45 · Updated last year
- ⚡ Edgen: Local, private GenAI server alternative to OpenAI. No GPU required. Run AI models locally: LLMs (Llama2, Mistral, Mixtral...), … ☆368 · Updated last year
- A quick Crew AI tutorial ☆23 · Updated last year
- The MCP enterprise actors-based server, or mcp-ectors for short ☆32 · Updated 6 months ago
- Data extraction with LLM on CPU ☆68 · Updated 2 years ago
- An LLM-powered, autonomous coding assistant. Also offers an MCP and ACP mode. ☆123 · Updated this week
- Welcome to FluidAPI, a framework that allows you to interact with APIs using natural language. No more JSON, headers, or complex for… ☆32 · Updated 2 months ago
- Ask shortgpt for instant and concise answers ☆12 · Updated 2 years ago
- Here is a collection of cool applications that I've built with AssemblyAI ☆35 · Updated last year
- Library for doing RAG ☆79 · Updated last week
- AI gateway and observability server written in Rust. Designed to help optimize multi-agent workflows. ☆64 · Updated last year
- Scrape Webpages with AI Vision ☆50 · Updated 2 years ago
- AI Agents with Google's Gemini Pro and Gemini Pro Vision Models ☆28 · Updated last year