nktice / AMD-AI
AMD (Radeon GPU) ROCm based setup for popular AI tools on Ubuntu 24.04.1
☆216 · Updated this week
Alternatives and similar repositories for AMD-AI
Users interested in AMD-AI are comparing it to the repositories listed below.
- AI Inferencing at the Edge. A simple one-file way to run various GGML models with KoboldAI's UI, with AMD ROCm offloading ☆717 · Updated last week
- ☆417 · Updated 7 months ago
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆580 · Updated this week
- ☆235 · Updated 2 years ago
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs. ☆695 · Updated last week
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama - but purpose-built and deeply optimized for AMD NPUs. ☆472 · Updated this week
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, Embedding, and Rerank models over OpenAI endpoints. ☆247 · Updated 3 weeks ago
- A daemon that automatically manages the performance states of NVIDIA GPUs. ☆100 · Updated 3 weeks ago
- Build scripts for ROCm ☆188 · Updated last year
- The official API server for Exllama. OAI-compatible, lightweight, and fast. ☆1,094 · Updated this week
- ☆495 · Updated this week
- Prometheus exporter for Linux-based GDDR6/GDDR6X VRAM and GPU core hot-spot temperature readings on NVIDIA RTX 3000/4000 series GPUs. ☆24 · Updated last year
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆586 · Updated this week
- DEPRECATED! ☆50 · Updated last year
- Core, junction, and VRAM temperature reader for Linux + GDDR6/GDDR6X GPUs ☆59 · Updated last month
- Stable Diffusion Docker image preconfigured for use with AMD Radeon cards ☆141 · Updated last year
- Docker variants of oobabooga's text-generation-webui, including pre-built images. ☆441 · Updated 3 weeks ago
- llama.cpp fork with additional SOTA quants and improved performance ☆1,341 · Updated this week
- 8-bit CUDA functions for PyTorch ☆68 · Updated 2 months ago
- A complete package that provides you with all the components needed to get started or dive deeper into Machine Learning Workloads on Cons… ☆42 · Updated last month
- ☆582 · Updated this week
- Reliable model swapping for any local OpenAI-compatible server - llama.cpp, vllm, etc. ☆1,933 · Updated this week
- General site for the GFX803 ROCm stuff ☆127 · Updated 3 months ago
- Run LLM Agents on Ryzen AI PCs in minutes ☆766 · Updated last week
- AMD APU-compatible Ollama. Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3, and other models. ☆121 · Updated last week
- Linux-based GDDR6/GDDR6X VRAM temperature reader for NVIDIA RTX 3000/4000 series GPUs. ☆106 · Updated 7 months ago
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration ☆112 · Updated last week
- LLM benchmark for throughput via Ollama (local LLMs) ☆312 · Updated 3 months ago
- A manual to help with using the Tesla P40 GPU ☆138 · Updated last year
- Stable Diffusion v1.7.0 & v1.9.3 & v1.10.1 on RDNA2/RDNA3 AMD ROCm with Docker Compose ☆13 · Updated last year