lhl / strix-halo-testing
☆51 · Updated last week
Alternatives and similar repositories for strix-halo-testing
Users interested in strix-halo-testing are comparing it to the repositories listed below.
- ☆129 · Updated this week
- Linux distro for AI computers. Go from bare-metal GPUs to running AI workloads - like vLLM, SGLang, RAG, and Agents - in minutes, fully a… ☆240 · Updated last week
- AI cluster deployed with Ansible on random computers with random capabilities ☆180 · Updated last week
- Lightweight inference server for OpenVINO ☆198 · Updated this week
- AMD APU-compatible Ollama. Get up and running with Llama 3.3, DeepSeek-R1, Phi-4, Gemma 3, Mistral Small 3.1, and other large language mod… ☆80 · Updated last week
- A cross-platform desktop application that lets you chat with locally hosted LLMs, with features like MCP support ☆222 · Updated 2 weeks ago
- System power monitoring using smart plugs from the terminal ☆204 · Updated 4 months ago
- Lightweight & fast AI inference proxy for self-hosted LLM backends like Ollama, LM Studio, and others. Designed for speed, simplicity and… ☆77 · Updated this week
- Welcome! ☆140 · Updated 8 months ago
- A lightweight UI for chatting with Ollama models. Streaming responses, conversation history, and multi-model support. ☆114 · Updated 5 months ago
- Mem0 integration with OpenWebUI ☆36 · Updated last week
- reddacted lets you analyze & sanitize your online footprint using LLMs, PII detection & sentiment analysis to identify anything that migh… ☆108 · Updated last month
- Run LLMs on AMD Ryzen™ AI NPUs. Just like Ollama - but purpose-built and deeply optimized for AMD NPUs. ☆129 · Updated last week
- Lemonade helps users run local LLMs with the highest performance by configuring state-of-the-art inference engines for their NPUs and GPU… ☆1,098 · Updated this week
- A daemon that automatically manages the performance states of NVIDIA GPUs. ☆93 · Updated 2 months ago
- ☆41 · Updated last week
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration ☆31 · Updated this week
- llmbasedos — Local-First OS Where Your AI Agents Wake Up and Work ☆271 · Updated this week
- ☆253 · Updated 2 months ago
- Run LLM agents on Ryzen AI PCs in minutes ☆518 · Updated last week
- A web application that converts speech to speech, 100% private ☆75 · Updated 2 months ago
- A prompt generator for my fork of the extended_openai_conversation HomeAssistant custom integration. ☆12 · Updated 4 months ago
- ☆53 · Updated last year
- A modern, local-first AI chat app for your browser. ☆33 · Updated 3 weeks ago
- Generate and execute command-line commands using an LLM ☆47 · Updated 6 months ago
- Connects MCP to major 3D printer APIs (Orca, Bambu, OctoPrint, Klipper, Duet, Repetier, Prusa, Creality). Control prints, monitor status,… ☆84 · Updated 2 months ago
- An MCP server that provides persistent memory capabilities through a local knowledge graph, enabling AI assistants to maintain context ac… ☆17 · Updated 3 months ago
- A DNS server that automatically discovers VMs and containers (LXCs) in your Proxmox cluster and makes them available via DNS ☆54 · Updated 4 months ago
- Ollama client for iOS, Android, macOS, and Windows that simplifies experimenting with LLMs. ☆203 · Updated this week
- Retrieval-Augmented Generation based on SQLite ☆312 · Updated this week