teabranch / open-responses-server
Wraps any OpenAI API interface as a Responses API with MCP support, so it works with Codex. Adds any missing stateful features. Ollama and vLLM compatible.
☆149 · Updated 3 months ago
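As a rough sketch of how such a wrapper is typically used, an OpenAI SDK client can be pointed at the server's base URL and driven through the Responses API. The port, API key, and model name below are placeholders for illustration, not documented defaults of this project:

```python
# Minimal sketch: talking to a Responses-compatible wrapper through the
# official OpenAI Python SDK. The base_url, api_key, and model name are
# assumptions for illustration, not settings documented by this project.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8080/v1",  # hypothetical local wrapper endpoint
    api_key="not-needed-locally",         # local backends often ignore the key
)

# The Responses API takes a flat `input` instead of a chat `messages` list.
resp = client.responses.create(
    model="llama3",  # placeholder: whatever model the backend serves
    input="Summarize what the Responses API adds over chat completions.",
)

print(resp.output_text)  # SDK convenience accessor for the text output
```

The point of the wrapper is that the backend behind `/v1` can be Ollama, vLLM, or any other chat-completions server, while the client sees a stateful Responses API.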
Alternatives and similar repositories for open-responses-server
Users interested in open-responses-server are comparing it to the libraries listed below.
- InferX: Inference as a Service Platform ☆156 · Updated this week
- High-performance lightweight proxy and load balancer for LLM infrastructure. Intelligent routing, automatic failover and unified model di… ☆137 · Updated last week
- llmbasedos — Local-First OS Where Your AI Agents Wake Up and Work ☆282 · Updated last month
- Since OpenAI and friends refuse to give us a max_ctx param in /models, here's the current context window, input token and output token li… ☆65 · Updated last month
- A simple tool to anonymize LLM prompts. ☆66 · Updated last year
- A cross-platform desktop application that lets you chat with locally hosted LLMs and enjoy features like MCP support ☆227 · Updated 6 months ago
- Serving LLMs in the HF-Transformers format via a PyFlask API ☆72 · Updated last year
- Enhancing LLMs with LoRA ☆206 · Updated 3 months ago
- ☆64 · Updated last year
- Magg: The MCP Aggregator ☆128 · Updated 6 months ago
- A lightweight agentic AI framework which works on Mac/Linux/WSL ☆44 · Updated 6 months ago
- ☆209 · Updated last month
- Shared Memory Storage for Multi-Agent Systems ☆139 · Updated 7 months ago
- An extension that lets the AI take the wheel, allowing it to use the mouse and keyboard, recognize UI elements, and prompt itself :3...no… ☆127 · Updated last year
- An Open Source, Claude Code Like Tool, With RAG + Graph RAG + MCP Integration, and Supports Most LLMs (Incomplete But Functional & Usable… ☆125 · Updated 6 months ago
- git-like RAG pipeline ☆256 · Updated last month
- Run multiple resource-heavy Large Models (LM) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆88 · Updated last week
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching using MLX. ☆100 · Updated 7 months ago
- 🗣️ Real‑time, low‑latency voice, vision, and conversational‑memory AI assistant built on LiveKit and local LLMs ✨ ☆107 · Updated 7 months ago
- ☆178 · Updated 6 months ago
- ☆134 · Updated last month
- Compose, manage, and run MCP servers as Docker containers, with a unified API gateway built in. ☆53 · Updated 4 months ago
- An MCP server allowing LLM agents to easily connect to and retrieve data from any database ☆99 · Updated 6 months ago
- ☆64 · Updated 7 months ago
- Llama.cpp runner/swapper and proxy that emulates LM Studio / Ollama backends ☆51 · Updated 5 months ago
- ☆205 · Updated 5 months ago
- ☆24 · Updated last year
- Fully local, temporally aware natural-language file search on your PC, even without a GPU. Find relevant files using natural language i… ☆168 · Updated last month
- No-messing-around sh client for llama.cpp's server ☆30 · Updated last year
- Documentation site for fast-agent ☆28 · Updated last week