kyuz0 / amd-strix-halo-toolboxes
☆313 · Updated last week
Alternatives and similar repositories for amd-strix-halo-toolboxes
Users interested in amd-strix-halo-toolboxes are comparing it to the repositories listed below.
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama - but purpose-built and deeply optimized for AMD NPUs. ☆280 · Updated this week
- ☆89 · Updated 3 weeks ago
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, and Kokoro-TTS over OpenAI endpoints. ☆211 · Updated this week
- Linux distro for AI computers. Go from bare-metal GPUs to running AI workloads - like vLLM, SGLang, RAG, and Agents - in minutes, fully a… ☆297 · Updated last month
- AI cluster deployed with Ansible on random computers with random capabilities. ☆235 · Updated last month
- Run LLM agents on Ryzen AI PCs in minutes. ☆639 · Updated last week
- Lightweight & fast AI inference proxy for self-hosted LLM backends like Ollama, LM Studio, and others. Designed for speed, simplicity and… ☆92 · Updated last week
- Lemonade helps users run local LLMs with the highest performance by configuring state-of-the-art inference engines for their NPUs and GPU… ☆1,385 · Updated last week
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration. ☆58 · Updated this week
- AMD APU-compatible Ollama. Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3, and other models. ☆108 · Updated this week
- LLM benchmark for throughput via Ollama (local LLMs). ☆297 · Updated last month
- This is a cross-platform desktop application that allows you to chat with locally hosted LLMs and enjoy features like MCP support. ☆224 · Updated last month
- Docs for GGUF quantization (unofficial). ☆275 · Updated 2 months ago
- ☆178 · Updated last month
- A beautiful local-first coding agent running in your terminal - built by the community for the community ⚒ ☆512 · Updated this week
- llama.cpp fork with additional SOTA quants and improved performance. ☆1,246 · Updated this week
- OpenAPI Tool Servers. ☆693 · Updated 2 weeks ago
- A platform to self-host AI on easy mode. ☆171 · Updated last week
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1. ☆211 · Updated 3 weeks ago
- GPU Power and Performance Manager. ☆61 · Updated 11 months ago
- ☆225 · Updated 5 months ago
- ☆252 · Updated 4 months ago
- ☆396 · Updated 6 months ago
- InferX: Inference as a Service platform. ☆136 · Updated this week
- The HIP Environment and ROCm Kit - a lightweight open-source build system for HIP and ROCm. ☆438 · Updated this week
- Wraps any OpenAI API interface as Responses with MCP support so it supports Codex, adding any missing stateful features. Ollama and Vllm… ☆110 · Updated 3 months ago
- A persistent local memory for AI, LLMs, or Copilot in VS Code. ☆154 · Updated last week
- OWUI tools and utilities. ☆55 · Updated 5 months ago
- ☆127 · Updated last week
- A multi-agent AI architecture that connects 25+ specialized agents through n8n and MCP servers. Project NOVA routes requests to domain-sp… ☆223 · Updated 4 months ago