LlamaEdge / whisper-api-server
The easiest & fastest way to run customized and fine-tuned LLMs locally or on the edge
☆26 · Updated 10 months ago
Alternatives and similar repositories for whisper-api-server
Users interested in whisper-api-server are comparing it to the libraries listed below.
- A RAG API server written in Rust following OpenAI specs ☆60 · Updated 9 months ago
- An educational Rust project for exporting and running inference on the Qwen3 LLM family ☆38 · Updated 5 months ago
- A wasm-interface-types supplement and compiler for WasmEdge ☆17 · Updated 2 years ago
- An enterprise actor-based MCP server, or mcp-ectors for short ☆31 · Updated 7 months ago
- AI Assistant ☆20 · Updated 9 months ago
- Portable LLM - a Rust library for LLM inference ☆10 · Updated last year
- A simple, CUDA- or CPU-powered library for creating vector embeddings using Candle and models from Hugging Face ☆46 · Updated last year
- AI gateway and observability server written in Rust. Designed to help optimize multi-agent workflows. ☆65 · Updated last year
- Build tools for LLMs in Rust using Model Context Protocol ☆37 · Updated 10 months ago
- Embed WasmEdge functions in a Rust host app ☆33 · Updated last year
- Use ChatGPT to review changed source code files in GitHub Pull Requests ☆25 · Updated last year
- A list of flow functions ☆35 · Updated 2 years ago
- A crate for building MCP (Model Context Protocol)-compatible programs in Rust ☆19 · Updated last year
- Rust implementation of Surya ☆64 · Updated 10 months ago
- Super-simple, fully Rust-powered "memory" (doc store + semantic search) for LLM projects, semantic search, etc. ☆65 · Updated 2 years ago
- Use MCP to manage containerd (in development) ☆52 · Updated 4 months ago
- 🦀 A pure Rust framework for building AGI (WIP). ☆111 · Updated last month
- An extensible CLI for integrating LLM models with a flexible scripting system ☆22 · Updated last year
- Implementing the BitNet model in Rust ☆44 · Updated last year
- The easiest & fastest way to run customized and fine-tuned LLMs locally or on the edge ☆24 · Updated 10 months ago
- ChronoMind: Redefining Vector Intelligence Through Time. ☆73 · Updated 8 months ago
- Built for demanding AI workflows, this gateway offers low-latency, provider-agnostic access, ensuring your AI applications run smoothly a… ☆88 · Updated 7 months ago
- A single-binary, GPU-accelerated LLM server (HTTP and WebSocket API) written in Rust ☆79 · Updated 2 years ago
- ☆256 · Updated 3 months ago
- bott: Your Terminal Copilot ☆87 · Updated last year
- Build MCP servers with WebAssembly components ☆64 · Updated last month
- This application demonstrates how to launch high-performance "serverless" functions from the YoMo framework to process streaming data. Th… ☆66 · Updated 2 years ago
- Simple Rust applications that run in WasmEdge ☆33 · Updated 2 years ago
- A pure-Rust LLM inference engine (for any LLM-based MLLM such as Spark-TTS), powered by the Candle framework ☆229 · Updated 3 weeks ago
- A collection of serverless apps that show how Fermyon's Serverless AI (currently in private beta) works. Reference: https://developer.fer… ☆50 · Updated last year