kyuz0 / amd-strix-halo-toolboxes
☆226 · Updated last week
Alternatives and similar repositories for amd-strix-halo-toolboxes
Users interested in amd-strix-halo-toolboxes are comparing it to the repositories listed below.
- ☆74 · Updated this week
- Lightweight inference server for OpenVINO ☆211 · Updated this week
- Run LLMs on AMD Ryzen™ AI NPUs. Just like Ollama, but purpose-built and deeply optimized for AMD NPUs. ☆194 · Updated this week
- Model swapping for llama.cpp (or any local OpenAI API-compatible server) ☆1,499 · Updated last week
- AI cluster deployed with Ansible on random computers with random capabilities ☆222 · Updated last week
- Lemonade helps users run local LLMs with the highest performance by configuring state-of-the-art inference engines for their NPUs and GPU… ☆1,265 · Updated this week
- Run LLM Agents on Ryzen AI PCs in Minutes ☆575 · Updated this week
- LLM Benchmark for Throughput via Ollama (Local LLMs) ☆291 · Updated last month
- Linux distro for AI computers. Go from bare-metal GPUs to running AI workloads - like vLLM, SGLang, RAG, and Agents - in minutes, fully a… ☆277 · Updated 2 weeks ago
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆210 · Updated 6 months ago
- InferX is an Inference Function-as-a-Service platform ☆133 · Updated this week
- ☆176 · Updated last week
- Lightweight & fast AI inference proxy for self-hosted LLM backends like Ollama, LM Studio, and others. Designed for speed, simplicity and… ☆87 · Updated this week
- Docs for GGUF quantization (unofficial) ☆258 · Updated 2 months ago
- ☆223 · Updated 4 months ago
- Manifold is a platform for enabling workflow automation using AI assistants. ☆459 · Updated last month
- A cross-platform desktop application that lets you chat with locally hosted LLMs, with features like MCP support ☆224 · Updated last month
- Local AI voice assistant stack for Home Assistant (GPU-accelerated) with persistent memory, follow-up conversation, and Ollama model reco… ☆197 · Updated last month
- GPU Power and Performance Manager ☆61 · Updated 11 months ago
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration ☆40 · Updated this week
- A persistent local memory for AI, LLMs, or Copilot in VS Code. ☆142 · Updated last week
- AMD APU-compatible Ollama. Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, Mistral Small 3.1, and other large language mod… ☆96 · Updated this week
- A beautiful local-first coding agent running in your terminal - built by the community for the community ⚒ ☆400 · Updated this week
- The specification for the Universal Tool Calling Protocol ☆213 · Updated this week
- A platform to self-host AI on easy mode ☆163 · Updated this week
- ☆391 · Updated 5 months ago
- A tool to determine whether your PC can run a given LLM ☆165 · Updated 7 months ago
- User-friendly AI Interface (Supports Ollama, OpenAI API, ...) ☆102 · Updated 5 months ago
- llama.cpp fork with additional SOTA quants and improved performance ☆21 · Updated this week
- llmbasedos — Local-First OS Where Your AI Agents Wake Up and Work ☆272 · Updated 3 weeks ago