lhl / strix-halo-testing
☆186 · Updated 2 months ago
Alternatives and similar repositories for strix-halo-testing
Users interested in strix-halo-testing are comparing it to the repositories listed below.
- ☆770 · Updated this week
- ☆105 · Updated 3 weeks ago
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama - but purpose-built and deeply optimized for the AMD NPUs. ☆637 · Updated this week
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, Embedding and Rerank models over OpenAI endpoints. ☆274 · Updated this week
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration ☆164 · Updated this week
- LLM fine-tuning toolbox images for Ryzen AI 395+ Strix Halo ☆42 · Updated 4 months ago
- llmbasedos — Local-First OS Where Your AI Agents Wake Up and Work ☆279 · Updated 2 weeks ago
- AI cluster deployed with Ansible on random computers with random capabilities ☆312 · Updated last month
- Lemonade helps users discover and run local AI apps by serving optimized LLMs right from their own GPUs and NPUs. Join our discord: https… ☆2,008 · Updated this week
- llama.cpp fork with additional SOTA quants and improved performance ☆1,511 · Updated this week
- High-performance lightweight proxy and load balancer for LLM infrastructure. Intelligent routing, automatic failover and unified model di… ☆127 · Updated this week
- LLM client, server API, and UI ☆413 · Updated this week
- ☆257 · Updated 7 months ago
- Docs for GGUF quantization (unofficial) ☆348 · Updated 6 months ago
- Linux distro for AI computers. Go from bare-metal GPUs to running AI workloads - like vLLM, SGLang, RAG, and Agents - in minutes, fully a… ☆330 · Updated last month
- Cross-platform desktop application for chatting with locally hosted LLMs, with features like MCP support ☆225 · Updated 5 months ago
- General tool-calling API proxy ☆55 · Updated 5 months ago
- InferX: Inference as a Service platform ☆146 · Updated this week
- ☆229 · Updated 8 months ago
- ☆136 · Updated this week
- Aggregates compute from spare GPU capacity ☆184 · Updated 2 weeks ago
- Build AI agents for your PC ☆885 · Updated this week
- ☆204 · Updated 4 months ago
- reddacted lets you analyze and sanitize your online footprint using LLMs, PII detection, and sentiment analysis to identify anything that migh… ☆116 · Updated 5 months ago
- Simple AI/LLM benchmarking tools ☆185 · Updated last month
- Local AI voice assistant stack for Home Assistant (GPU-accelerated) with persistent memory, follow-up conversation, and Ollama model reco… ☆225 · Updated 5 months ago
- AMD APU-compatible Ollama. Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3, and other models. ☆143 · Updated this week
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆216 · Updated last month
- Plug-and-play memory for LLMs in 3 lines of code. Add persistent, intelligent, human-like memory and recall to any model in minutes. ☆244 · Updated last month
- LLM benchmark for throughput via Ollama (local LLMs) ☆323 · Updated this week