lhl / strix-halo-testing
☆89 · Updated 3 weeks ago
Alternatives and similar repositories for strix-halo-testing
Users interested in strix-halo-testing are comparing it to the libraries listed below.
- ☆313 · Updated last week
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama - but purpose-built and deeply optimized for the AMD NPUs. ☆280 · Updated this week
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS over OpenAI endpoints. ☆211 · Updated this week
- Linux distro for AI computers. Go from bare-metal GPUs to running AI workloads - like vLLM, SGLang, RAG, and Agents - in minutes, fully a… ☆297 · Updated last month
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration ☆58 · Updated this week
- Lightweight & fast AI inference proxy for self-hosted LLM backends like Ollama, LM Studio and others. Designed for speed, simplicity and… ☆92 · Updated last week
- llama.cpp fork with additional SOTA quants and improved performance ☆21 · Updated last week
- A lightweight UI for chatting with Ollama models. Streaming responses, conversation history, and multi-model support. ☆122 · Updated 6 months ago
- Run LLM Agents on Ryzen AI PCs in Minutes ☆639 · Updated last week
- Generate and execute command line commands using LLM ☆49 · Updated 7 months ago
- reddacted lets you analyze & sanitize your online footprint using LLMs, PII detection & sentiment analysis to identify anything that migh… ☆108 · Updated 2 months ago
- ☆252 · Updated 4 months ago
- A cross-platform desktop application that allows you to chat with locally hosted LLMs and enjoy features like MCP support ☆224 · Updated last month
- Tiny truly local voice-activated LLM Agent that runs on a Raspberry Pi ☆187 · Updated last week
- GPU Power and Performance Manager ☆61 · Updated 11 months ago
- FamilyBench evaluation tool for testing the relational reasoning capabilities of Large Language Models (LLMs). ☆36 · Updated 2 weeks ago
- No-code CLI designed for accelerating ONNX workflows ☆214 · Updated 3 months ago
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆438 · Updated this week
- AMD APU-compatible Ollama. Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models. ☆108 · Updated this week
- AI cluster deployed with Ansible on random computers with random capabilities ☆235 · Updated last month
- A platform to self-host AI on easy mode ☆171 · Updated last week
- Chanakya is an advanced, open-source, and self-hostable voice assistant designed for privacy, power, and flexibility. It leverages local … ☆160 · Updated 3 weeks ago
- World's first AMD NPU driver for TUXEDO laptops - Enable AI acceleration on Linux ☆34 · Updated 2 months ago
- Simple Ollama benchmarking tool. ☆140 · Updated last week
- llmbasedos — Local-First OS Where Your AI Agents Wake Up and Work ☆279 · Updated last month
- A cross-platform app that unlocks your device's Gen AI capabilities ☆64 · Updated 3 weeks ago
- A web application that converts speech to speech, 100% private ☆76 · Updated 4 months ago
- Welcome! ☆140 · Updated 9 months ago
- llama.cpp fork with additional SOTA quants and improved performance ☆1,246 · Updated this week
- ☆90 · Updated this week