kyuz0 / amd-strix-halo-toolboxes
☆129 · Updated this week
Alternatives and similar repositories for amd-strix-halo-toolboxes
Users interested in amd-strix-halo-toolboxes are comparing it to the libraries listed below
- ☆51 · Updated last week
- Lightweight inference server for OpenVINO ☆198 · Updated this week
- Lemonade helps users run local LLMs with the highest performance by configuring state-of-the-art inference engines for their NPUs and GPU… ☆1,098 · Updated this week
- Model swapping for llama.cpp (or any local OpenAI-compatible server; see the request sketch after this list) ☆1,370 · Updated this week
- Run LLMs on AMD Ryzen™ AI NPUs. Just like Ollama, but purpose-built and deeply optimized for AMD NPUs. ☆129 · Updated last week
- llama.cpp fork with additional SOTA quants and improved performance ☆1,096 · Updated this week
- Linux distro for AI computers. Go from bare-metal GPUs to running AI workloads - like vLLM, SGLang, RAG, and Agents - in minutes, fully a… ☆240 · Updated last week
- AI cluster deployed with Ansible on random computers with random capabilities ☆180 · Updated last week
- llmbasedos — Local-First OS Where Your AI Agents Wake Up and Work ☆271 · Updated this week
- ☆221 · Updated 3 months ago
- Run LLM Agents on Ryzen AI PCs in Minutes ☆518 · Updated last week
- Docs for GGUF quantization (unofficial) ☆251 · Updated last month
- Cross-platform desktop application for chatting with locally hosted LLMs, with features like MCP support ☆222 · Updated 2 weeks ago
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆209 · Updated 6 months ago
- LLM benchmark for throughput via Ollama (local LLMs) ☆280 · Updated 2 weeks ago
- ☆381 · Updated 4 months ago
- Code execution utilities for Open WebUI & Ollama ☆297 · Updated 9 months ago
- A platform to self-host AI on easy mode ☆159 · Updated 2 weeks ago
- reddacted lets you analyze & sanitize your online footprint using LLMs, PII detection & sentiment analysis to identify anything that migh… ☆108 · Updated last month
- ☆168 · Updated last week
- A tool to determine whether your PC can run a given LLM ☆164 · Updated 6 months ago
- InferX is an Inference Function-as-a-Service platform ☆128 · Updated this week
- Lightweight & fast AI inference proxy for self-hosted LLM backends like Ollama, LM Studio and others. Designed for speed, simplicity and… ☆77 · Updated this week
- No-messing-around sh client for llama.cpp's server ☆30 · Updated last year
- Manifold is a platform for enabling workflow automation using AI assistants. ☆457 · Updated 3 weeks ago
- Make PyTorch models at least run on APUs. ☆56 · Updated last year
- ☆28 · Updated 2 months ago
- Parse files (e.g. code repos) and websites to clipboard or a file for ingestion by AI / LLMs ☆291 · Updated 2 weeks ago
- ☆162 · Updated 2 weeks ago
- Local LLM Powered Recursive Search & Smart Knowledge Explorer ☆251 · Updated 6 months ago
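
Several entries above (llama-swap, Lemonade, the inference proxies) expose an OpenAI-compatible HTTP API, which is what makes them drop-in backends for existing tooling. As a rough illustration of what that compatibility means, here is a minimal Python sketch of a chat-completion request against a local server; the URL, port, and model name are placeholder assumptions, not values from any specific project above.

```python
# Minimal sketch of a chat-completion request to a local
# OpenAI-compatible server (llama-swap, Lemonade, etc.).
# The URL and model id below are assumptions; adjust for your setup.
import json
import urllib.request

URL = "http://localhost:8080/v1/chat/completions"  # assumed local address

payload = {
    "model": "llama-3.1-8b-instruct",  # hypothetical model id
    "messages": [{"role": "user", "content": "Hello!"}],
}

req = urllib.request.Request(
    URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)

# Responses follow the OpenAI chat-completions shape.
print(body["choices"][0]["message"]["content"])
```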