lhl / strix-halo-testing
☆114 · Updated last week
Alternatives and similar repositories for strix-halo-testing
Users interested in strix-halo-testing are comparing it to the repositories listed below.
- ☆415 · Updated last week
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama, but purpose-built and deeply optimized for AMD NPUs. ☆378 · Updated this week
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, embedding, and rerank models over OpenAI endpoints. ☆226 · Updated this week
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration. ☆79 · Updated this week
- Linux distro for AI computers. Go from bare-metal GPUs to running AI workloads, like vLLM, SGLang, RAG, and agents, in minutes, fully a… ☆307 · Updated last month
- AI cluster deployed with Ansible on random computers with random capabilities. ☆252 · Updated last month
- ☆253 · Updated 4 months ago
- A cross-platform desktop application that lets you chat with locally hosted LLMs and offers features like MCP support. ☆224 · Updated 2 months ago
- High-performance lightweight proxy and load balancer for LLM infrastructure. Intelligent routing, automatic failover, and unified model di… ☆107 · Updated last week
- llmbasedos — Local-First OS Where Your AI Agents Wake Up and Work. ☆276 · Updated 2 months ago
- GPU power and performance manager. ☆60 · Updated last year
- Welcome! ☆140 · Updated 10 months ago
- llama.cpp fork with additional SOTA quants and improved performance. ☆21 · Updated last week
- LLM client, server, and agent. ☆73 · Updated this week
- ☆226 · Updated 5 months ago
- A web application that converts speech to speech, 100% private. ☆77 · Updated 4 months ago
- Lemonade helps users run local LLMs with the highest performance by configuring state-of-the-art inference engines for their NPUs and GPU… ☆1,512 · Updated this week
- llama.cpp fork with additional SOTA quants and improved performance. ☆1,277 · Updated this week
- Simple AI/LLM benchmarking tools. ☆148 · Updated last week
- ☆28 · Updated 4 months ago
- No-code CLI designed for accelerating ONNX workflows. ☆215 · Updated 4 months ago
- A platform to self-host AI on easy mode. ☆171 · Updated last week
- A comprehensive list of document parsers, covering PDF-to-text conversion and layout extraction. Each tested for support of tables, equat… ☆160 · Updated 3 months ago
- A daemon that automatically manages the performance states of NVIDIA GPUs. ☆96 · Updated last month
- reddacted lets you analyze and sanitize your online footprint using LLMs, PII detection, and sentiment analysis to identify anything that migh… ☆111 · Updated 3 months ago
- AMD APU-compatible Ollama. Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3, and other models. ☆116 · Updated last week
- Reliable model swapping for any local OpenAI-compatible server (llama.cpp, vLLM, etc.). ☆1,764 · Updated this week
- ☆48 · Updated 2 weeks ago
- CoexistAI is a modular, developer-friendly research assistant framework. It enables you to build, search, summarize, and automate resear… ☆320 · Updated last week
- LLM benchmark for throughput via Ollama (local LLMs). ☆303 · Updated 2 months ago