nktice / AMD-AI
AMD (Radeon GPU) ROCm based setup for popular AI tools on Ubuntu 24.04.1
☆212Updated this week
Alternatives and similar repositories for AMD-AI
Users interested in AMD-AI are comparing it to the libraries listed below
- ☆404Updated 6 months ago
- AI Inferencing at the Edge. A simple one-file way to run various GGML models with KoboldAI's UI and AMD ROCm offloading☆700Updated this week
- A daemon that automatically manages the performance states of NVIDIA GPUs.☆96Updated 3 weeks ago
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama - but purpose-built and deeply optimized for the AMD NPUs.☆321Updated last week
- The official API server for Exllama. OpenAI-API compatible (see the sketch after this list), lightweight, and fast.☆1,068Updated this week
- 8-bit CUDA functions for PyTorch Rocm compatible☆41Updated last year
- ☆233Updated 2 years ago
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS over OpenAI endpoints.☆213Updated last week
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs☆532Updated this week
- A complete package that provides you with all the components needed to get started or dive deeper into Machine Learning Workloads on Cons…☆39Updated 3 weeks ago
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs.☆665Updated this week
- Web UI for ExLlamaV2☆510Updated 8 months ago
- Docker variants of oobabooga's text-generation-webui, including pre-built images.☆439Updated 3 months ago
- llama.cpp fork with additional SOTA quants and improved performance☆1,258Updated this week
- Core, Junction, and VRAM temperature reader for Linux + GDDR6/GDDR6X GPUs☆54Updated 5 months ago
- Model swapping for llama.cpp (or any local OpenAI API compatible server)☆1,690Updated this week
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration☆74Updated this week
- A manual to help with using the Tesla P40 GPU☆134Updated 11 months ago
- Stable Diffusion Docker image preconfigured for usage with AMD Radeon cards☆139Updated last year
- General Site for the GFX803 ROCm Stuff☆119Updated last month
- ☆83Updated 2 weeks ago
- Linux based GDDR6/GDDR6X VRAM temperature reader for NVIDIA RTX 3000/4000 series GPUs.☆104Updated 5 months ago
- Make PyTorch models at least run on APUs.☆56Updated last year
- ☆477Updated this week
- Prometheus exporter for Linux based GDDR6/GDDR6X VRAM and GPU Core Hot spot temperature reader for NVIDIA RTX 3000/4000 series GPUs.☆23Updated last year
- An OpenAI API compatible text to speech server using Coqui AI's xtts_v2 and/or piper tts as the backend.☆819Updated 8 months ago
- DEPRECATED!☆50Updated last year
- ROCm docker images with fixes/support for the legacy architecture gfx803, e.g. Radeon RX 590/RX 580/RX 570/RX 480☆76Updated 5 months ago
- Dolphin System Messages☆351Updated 8 months ago
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2.☆165Updated last year
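
Several of the servers above (the Exllama API server, the Intel inference engine, the llama.cpp model-swapping proxy, and the xtts_v2/piper TTS server) advertise OpenAI-compatible endpoints, meaning any standard OpenAI client can talk to them. Below is a minimal sketch using the official `openai` Python package; the base URL, port, API key, and model name are placeholder assumptions, not values taken from any of the listed projects — check each project's README for the values it actually expects.

```python
# Minimal sketch: querying a locally hosted OpenAI-compatible server.
# The base_url, api_key, and model name are assumptions for illustration only.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:5000/v1",  # assumed local endpoint and port
    api_key="sk-local",                   # many local servers accept any non-empty string
)

# Send a single chat-completion request to the local server.
response = client.chat.completions.create(
    model="my-local-model",               # placeholder model identifier
    messages=[{"role": "user", "content": "Hello from an AMD ROCm box!"}],
)
print(response.choices[0].message.content)
```

Because the request/response shapes follow the OpenAI spec, the same snippet works against any of the OpenAI-compatible servers in the list once the base URL and model name are swapped for the ones the chosen server exposes.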