kyuz0 / amd-strix-halo-toolboxes
☆612 · Updated this week
Alternatives and similar repositories for amd-strix-halo-toolboxes
Users interested in amd-strix-halo-toolboxes are comparing it to the repositories listed below
- ☆154 · Updated last month
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama - but purpose-built and deeply optimized for AMD NPUs. ☆488 · Updated last week
- Reliable model swapping for any local OpenAI/Anthropic-compatible server - llama.cpp, vllm, etc. ☆1,977 · Updated last week
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, embedding and rerank models over OpenAI endpoints. ☆260 · Updated last week
- Lemonade helps users run local LLMs with the highest performance by configuring state-of-the-art inference engines for their NPUs and GPU… ☆1,827 · Updated this week
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration ☆129 · Updated this week
- AI cluster deployed with Ansible on random computers with random capabilities ☆283 · Updated last week
- Run LLM Agents on Ryzen AI PCs in Minutes ☆792 · Updated this week
- ☆1,215 · Updated this week
- llama.cpp fork with additional SOTA quants and improved performance ☆1,358 · Updated last week
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆216 · Updated last week
- Linux distro for AI computers. Go from bare-metal GPUs to running AI workloads - like vLLM, SGLang, RAG, and Agents - in minutes, fully a… ☆319 · Updated last week
- LLM fine-tuning toolbox images for Ryzen AI 395+ Strix Halo ☆36 · Updated 2 months ago
- A beautiful local-first coding agent running in your terminal - built by the community for the community ⚒ ☆929 · Updated this week
- ☆258 · Updated 6 months ago
- ☆228 · Updated 7 months ago
- ☆195 · Updated 3 months ago
- ☆99 · Updated 2 weeks ago
- RamaLama is an open-source developer tool that simplifies the local serving of AI models from any source and facilitates their use for in… ☆2,357 · Updated this week
- High-performance lightweight proxy and load balancer for LLM infrastructure. Intelligent routing, automatic failover and unified model di… ☆119 · Updated 2 weeks ago
- LLM Benchmark for Throughput via Ollama (Local LLMs) ☆313 · Updated 3 months ago
- Web UI and API for managing MCP Orchestrator (mcpo) instances and configurations ☆127 · Updated 6 months ago
- The AI toolkit for the AI developer ☆1,097 · Updated this week
- LLM Client, Server API and UI ☆400 · Updated last week
- Docs for GGUF quantization (unofficial) ☆330 · Updated 4 months ago
- A multi-agent AI architecture that connects 25+ specialized agents through n8n and MCP servers. Project NOVA routes requests to domain-sp… ☆242 · Updated 6 months ago
- Manifold is a platform for enabling workflow automation using AI assistants. ☆468 · Updated last week
- Local AI voice assistant stack for Home Assistant (GPU-accelerated) with persistent memory, follow-up conversation, and Ollama model reco… ☆217 · Updated 4 months ago
- A persistent local memory for AI, LLMs, or Copilot in VS Code. ☆175 · Updated last month
- A cross-platform desktop application for chatting with locally hosted LLMs, with features like MCP support. ☆226 · Updated 3 months ago