nktice / AMD-AI
AMD (Radeon GPU) ROCm based setup for popular AI tools on Ubuntu 24.04.1
☆212 · Updated 2 weeks ago
Alternatives and similar repositories for AMD-AI
Users interested in AMD-AI are comparing it to the repositories listed below.
- AI Inferencing at the Edge. A simple one-file way to run various GGML models with KoboldAI's UI with AMD ROCm offloading ☆707 · Updated 3 weeks ago
- ☆411 · Updated 7 months ago
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, Embedding and Rerank models over OpenAI endpoints. ☆236 · Updated this week
- A daemon that automatically manages the performance states of NVIDIA GPUs. ☆97 · Updated last week
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama - but purpose-built and deeply optimized for the AMD NPUs. ☆403 · Updated this week
- Build scripts for ROCm ☆187 · Updated last year
- llama.cpp fork with additional SOTA quants and improved performance ☆1,296 · Updated this week
- ☆234 · Updated 2 years ago
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆535 · Updated this week
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆552 · Updated last week
- The official API server for Exllama. OAI compatible, lightweight, and fast. ☆1,077 · Updated this week
- Stable Diffusion Docker image preconfigured for usage with AMD Radeon cards ☆140 · Updated last year
- Web UI for ExLlamaV2 ☆511 · Updated 9 months ago
- Docker variants of oobabooga's text-generation-webui, including pre-built images ☆440 · Updated last week
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆165 · Updated last year
- Reliable model swapping for any local OpenAI-compatible server - llama.cpp, vLLM, etc. ☆1,820 · Updated this week
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs. ☆683 · Updated 2 weeks ago
- A complete package that provides you with all the components needed to get started or dive deeper into Machine Learning Workloads on Cons… ☆40 · Updated 2 weeks ago
- Dolphin System Messages ☆356 · Updated 8 months ago
- Linux-based GDDR6/GDDR6X VRAM temperature reader for NVIDIA RTX 3000/4000 series GPUs ☆104 · Updated 6 months ago
- Prometheus exporter for Linux-based GDDR6/GDDR6X VRAM and GPU core hot-spot temperature reader for NVIDIA RTX 3000/4000 series GPUs ☆23 · Updated last year
- 8-bit CUDA functions for PyTorch ☆66 · Updated last month
- General site for the GFX803 ROCm stuff ☆123 · Updated 2 months ago
- Prebuilt Windows ROCm libs for gfx1031 and gfx1032 ☆164 · Updated 7 months ago
- SHARK Studio -- Web UI for SHARK+IREE High Performance Machine Learning Distribution ☆1,449 · Updated 7 months ago
- Core, junction, and VRAM temperature reader for Linux + GDDR6/GDDR6X GPUs ☆56 · Updated 2 weeks ago
- ☆85 · Updated last month
- ☆489 · Updated this week
- vLLM for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60 ☆311 · Updated last month
- 8-bit CUDA functions for PyTorch, ROCm compatible ☆41 · Updated last year