kyuz0 / amd-strix-halo-toolboxes
☆524 · Updated this week
Alternatives and similar repositories for amd-strix-halo-toolboxes
Users interested in amd-strix-halo-toolboxes are comparing it to the repositories listed below.
- Reliable model swapping for any local OpenAI-compatible server (llama.cpp, vLLM, etc.) · ☆1,862 · Updated last week
- Lemonade helps users run local LLMs with the highest performance by configuring state-of-the-art inference engines for their NPUs and GPU… · ☆1,622 · Updated this week
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration · ☆103 · Updated this week
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama, but purpose-built and deeply optimized for AMD NPUs. · ☆451 · Updated this week
- AI cluster deployed with Ansible on random computers with random capabilities · ☆273 · Updated 2 months ago
- llama.cpp fork with additional SOTA quants and improved performance · ☆1,329 · Updated this week
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, embedding, and rerank models over OpenAI endpoints. · ☆241 · Updated last week
- AMD APU-compatible Ollama. Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3, and other models. · ☆120 · Updated this week
- Run LLM agents on Ryzen AI PCs in minutes · ☆744 · Updated this week
- Docs for GGUF quantization (unofficial) · ☆312 · Updated 4 months ago
- A beautiful local-first coding agent running in your terminal, built by the community for the community ⚒ · ☆872 · Updated last week
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 · ☆216 · Updated 2 weeks ago
- Manifold is a platform for enabling workflow automation using AI assistants. · ☆464 · Updated this week
- OpenAPI Tool Servers · ☆749 · Updated last month
- A daemon that automatically manages the performance states of NVIDIA GPUs. · ☆97 · Updated 2 weeks ago
- Linux distro for AI computers. Go from bare-metal GPUs to running AI workloads (vLLM, SGLang, RAG, and agents) in minutes, fully a… · ☆315 · Updated 2 months ago
- LLM benchmark for throughput via Ollama (local LLMs) · ☆311 · Updated 3 months ago
- A cross-platform desktop application that lets you chat with locally hosted LLMs, with features like MCP support · ☆225 · Updated 3 months ago
- LLM fine-tuning toolbox images for Ryzen AI 395+ Strix Halo · ☆30 · Updated 2 months ago
- Use Intel Arc series GPUs to run Ollama, Stable Diffusion, Whisper, and Open WebUI for image generation, speech recognition, and int… · ☆184 · Updated 5 months ago
- LLM client, server API, and UI · ☆393 · Updated this week
- VS Code extension for LLM-assisted code/text completion · ☆1,049 · Updated 2 weeks ago
- A complete package that provides all the components needed to get started or dive deeper into machine learning workloads on cons… · ☆40 · Updated 3 weeks ago
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs · ☆571 · Updated this week
- WilmerAI is one of the oldest LLM semantic routers. It uses multi-layer prompt routing and complex workflows to allow you to not only cre… · ☆786 · Updated last month
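Several of the servers listed above (llama.cpp, vLLM, the Ollama variants, Lemonade, etc.) expose an OpenAI-compatible API, which is what makes tools like model swappers and clients interchangeable across them. A minimal sketch of what such a request looks like, using only the Python standard library; the port, path, and model name are assumptions, not taken from any specific project:

```python
import json
from urllib import request

# Hypothetical local endpoint; many OpenAI-compatible servers serve a
# similar /v1/chat/completions path, but host, port, and model name
# vary per project and are assumed here.
BASE_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(model: str, prompt: str) -> dict:
    """Build a minimal OpenAI-style chat-completions payload."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = build_chat_request("my-local-model", "Hello!")
body = json.dumps(payload).encode("utf-8")

# Sending is left commented out so the sketch runs without a live server:
# req = request.Request(BASE_URL, data=body,
#                       headers={"Content-Type": "application/json"})
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
print(sorted(payload))  # → ['messages', 'model']
```

Because the wire format is the same everywhere, pointing `BASE_URL` at a different server from the list above is usually the only change needed.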