lhl / strix-halo-testing
☆154 · Updated last month
Alternatives and similar repositories for strix-halo-testing
Users interested in strix-halo-testing are comparing it to the repositories listed below.
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama - but purpose-built and deeply optimized for the AMD NPUs. (☆518, updated this week)
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration (☆129, updated this week)
- Linux distro for AI computers. Go from bare-metal GPUs to running AI workloads - like vLLM, SGLang, RAG, and Agents - in minutes, fully a… (☆319, updated last week)
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, Embedding and Rerank models over OpenAI endpoints (a minimal client sketch follows this list). (☆260, updated last week)
- AI cluster deployed with Ansible on random computers with random capabilities (☆283, updated last week)
- Run LLM Agents on Ryzen AI PCs in Minutes (☆792, updated this week)
- Lemonade helps users run local LLMs with the highest performance by configuring state-of-the-art inference engines for their NPUs and GPU… (☆1,827, updated this week)
- llama.cpp fork with additional SOTA quants and improved performance (☆1,358, updated last week)
- LLM Client, Server API and UI (☆400, updated last week)
- LLM Fine Tuning Toolbox images for Ryzen AI 395+ Strix Halo (☆36, updated 2 months ago)
- High-performance lightweight proxy and load balancer for LLM infrastructure. Intelligent routing, automatic failover and unified model di… (☆119, updated 2 weeks ago)
- A cross-platform desktop application that lets you chat with locally hosted LLMs, with features like MCP support.
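
Several of the projects above (the Intel inference engine, Lemonade, and the llama.cpp builds) expose OpenAI-compatible HTTP endpoints, which is what makes them largely interchangeable from a client's point of view. As a rough sketch of what that looks like, the Python snippet below sends a chat-completions request using only the standard library; the URL, port, and model name are placeholder assumptions, since each server configures its own.

```python
import json
import urllib.request

# Placeholder endpoint: adjust host/port to wherever your local server
# (llama-server, Lemonade, etc.) is listening. The request shape is the same.
URL = "http://localhost:8080/v1/chat/completions"

payload = {
    "model": "local-model",  # placeholder; many local servers ignore or remap this
    "messages": [
        {"role": "user", "content": "In one sentence, what is AMD Strix Halo?"}
    ],
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# OpenAI-style servers return JSON with the generated text under choices[].
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

print(body["choices"][0]["message"]["content"])
```

Because the request shape is standardized, swapping one backend for another usually means changing only the URL and, possibly, the model name.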