callbacked / qwen3-mcp
An MCP-enabled Qwen3 0.6B demo with adjustable thinking budget, all in your browser!
☆25 · Updated 3 months ago
Alternatives and similar repositories for qwen3-mcp
Users who are interested in qwen3-mcp are comparing it to the repositories listed below.
- A Multi-Agentic AI Assistant/Builder ☆24 · Updated 3 weeks ago
- ☆24 · Updated 8 months ago
- The heart of The Pulsar App: fast, secure, and shared inference with a modern UI ☆57 · Updated 9 months ago
- Chat WebUI is an easy-to-use user interface for interacting with AI, and it comes with multiple useful built-in tools such as web search … ☆45 · Updated 3 weeks ago
- ☆48 · Updated 6 months ago
- ☆13 · Updated 5 months ago
- A TTS model capable of generating ultra-realistic dialogue in one pass. ☆31 · Updated 4 months ago
- A real-time shared memory layer for multi-agent LLM systems. ☆48 · Updated 3 months ago
- Generate a llama-quantize command to copy the quantization parameters of any GGUF ☆24 · Updated last month
- 🎮 Material You TUI for monitoring NVIDIA GPUs ☆55 · Updated 3 months ago
- Attend - to what matters. ☆17 · Updated 7 months ago
- Experience the power of AI with this free AI voice generator demo. Utilizing Deepgram and Groq, we transform text into voice seamlessly. … ☆37 · Updated last year
- An F/OSS solution combining AI with Wikipedia knowledge via a RAG pipeline ☆62 · Updated 8 months ago
- Run ollama & GGUF models easily with a single command ☆52 · Updated last year
- A fully autonomous agent that accesses the browser and performs tasks. ☆17 · Updated 5 months ago
- AirLLM 70B inference with a single 4GB GPU ☆14 · Updated 3 months ago
- Make Qwen3 think like Gemini 2.5 Pro | Open WebUI function ☆23 · Updated 4 months ago
- ☆62 · Updated 2 months ago
- Yet Another (LLM) Web UI, made with Gemini ☆12 · Updated 9 months ago
- ☆60 · Updated 3 months ago
- Complex RAG backend ☆29 · Updated last year
- ☆20 · Updated last year
- Run multiple resource-heavy Large Models (LMs) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆82 · Updated 2 weeks ago
- Local LLM inference & management server with a built-in OpenAI API ☆31 · Updated last year
- LLM Ripper is a framework for component extraction (embeddings, attention heads, FFNs), activation capture, functional analysis, and adap… ☆46 · Updated last week
- A Python package for serving LLMs on OpenAI-compatible API endpoints with prompt caching using MLX. ☆95 · Updated 3 months ago
- A unified library for interacting with various AI APIs through a standardized interface. ☆31 · Updated 6 months ago
- Your personal and private AI ☆50 · Updated 5 months ago
- A local front-end for open-weight LLMs with memory, RAG, TTS/STT, Elo ratings, and dynamic research tools. Built with React and FastAPI. ☆38 · Updated last month
- ☆17 · Updated 9 months ago