lhl / strix-halo-testing
☆173 · Updated 2 months ago
Alternatives and similar repositories for strix-halo-testing
Users interested in strix-halo-testing are comparing it to the repositories listed below.
- ☆695 · Updated last week
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama - but purpose-built and deeply optimized for the AMD NPUs. ☆572 · Updated this week
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, Embedding and Rerank models over OpenAI endpoints. ☆267 · Updated this week
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration. ☆149 · Updated this week
- ☆86 · Updated last week
- Lemonade helps users discover and run local AI apps by serving optimized LLMs right from their own GPUs and NPUs. Join our discord: https… ☆1,920 · Updated this week
- A cross-platform desktop application that lets you chat with locally hosted LLMs, with features like MCP support. ☆226 · Updated 4 months ago
- AI cluster deployed with Ansible on random computers with random capabilities. ☆303 · Updated 3 weeks ago
- High-performance lightweight proxy and load balancer for LLM infrastructure. Intelligent routing, automatic failover and unified model di… ☆124 · Updated last week
- LLM client, server API and UI. ☆404 · Updated this week
- AMD APU-compatible Ollama. Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models. ☆138 · Updated last week
- llama.cpp fork with additional SOTA quants and improved performance. ☆1,407 · Updated this week
- llmbasedos — Local-First OS Where Your AI Agents Wake Up and Work. ☆279 · Updated 4 months ago
- Reliable model swapping for any local OpenAI/Anthropic-compatible server (llama.cpp, vllm, etc.). ☆2,086 · Updated last week
- Linux distro for AI computers. Go from bare-metal GPUs to running AI workloads - like vLLM, SGLang, RAG, and Agents - in minutes, fully a… ☆326 · Updated last month
- LLM fine-tuning toolbox images for Ryzen AI 395+ Strix Halo. ☆39 · Updated 3 months ago
- Aggregates compute from spare GPU capacity. ☆182 · Updated last week