nktice / AMD-AI
AMD (Radeon GPU) ROCm based setup for popular AI tools on Ubuntu 24.04.1
☆209 · Updated 6 months ago
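As a quick sanity check for this kind of ROCm setup, the sketch below verifies that a ROCm-enabled PyTorch build can see the Radeon GPU. It is a minimal example and assumes the ROCm wheel of PyTorch is already installed; it is not part of the AMD-AI repository itself.

```python
# Minimal sanity check for a ROCm-enabled PyTorch install (assumes the ROCm
# build of torch is installed; not taken from the AMD-AI repo).
import torch

# On ROCm builds, torch.version.hip holds the HIP version; on CUDA builds it is None.
print("HIP runtime:", torch.version.hip)

# PyTorch exposes ROCm devices through the torch.cuda API.
if torch.cuda.is_available():
    print("GPU detected:", torch.cuda.get_device_name(0))
else:
    print("No ROCm-visible GPU; check drivers and group membership (video/render).")
```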
Alternatives and similar repositories for AMD-AI
Users interested in AMD-AI are comparing it to the repositories listed below.
- AI Inferencing at the Edge. A simple one-file way to run various GGML models with KoboldAI's UI with AMD ROCm offloading ☆682 · Updated 2 weeks ago
- ☆381 · Updated 4 months ago
- llama.cpp fork with additional SOTA quants and improved performance ☆1,111 · Updated this week
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆484 · Updated this week
- The official API server for Exllama. OAI compatible, lightweight, and fast. ☆1,039 · Updated last week
- Lightweight Inference server for OpenVINO ☆202 · Updated this week
- A daemon that automatically manages the performance states of NVIDIA GPUs. ☆94 · Updated last week
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆331 · Updated this week
- Stable Diffusion Docker image preconfigured for usage with AMD Radeon cards ☆138 · Updated last year
- ☆231 · Updated 2 years ago
- Run LLMs on AMD Ryzen™ AI NPUs. Just like Ollama - but purpose-built and deeply optimized for the AMD NPUs. ☆148 · Updated this week
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs. ☆617 · Updated 2 weeks ago
- 8-bit CUDA functions for PyTorch, ROCm compatible ☆41 · Updated last year
- Web UI for ExLlamaV2 ☆510 · Updated 6 months ago
- Core, Junction, and VRAM temperature reader for Linux + GDDR6/GDDR6X GPUs ☆52 · Updated 3 months ago
- 8-bit CUDA functions for PyTorch ☆61 · Updated last week
- Build scripts for ROCm ☆185 · Updated last year
- Model swapping for llama.cpp (or any local OpenAI API compatible server) ☆1,432 · Updated this week
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆162 · Updated last year
- LLM Benchmark for Throughput via Ollama (Local LLMs) ☆286 · Updated 2 weeks ago
- A manual for using the Tesla P40 GPU ☆129 · Updated 9 months ago
- Prebuilt Windows ROCm Libs for gfx1031 and gfx1032 ☆159 · Updated 5 months ago
- Docker variants of oobabooga's text-generation-webui, including pre-built images. ☆436 · Updated last month
- Dolphin System Messages ☆346 · Updated 6 months ago
- Prometheus exporter for Linux-based GDDR6/GDDR6X VRAM and GPU core hot-spot temperature readings on NVIDIA RTX 3000/4000 series GPUs. ☆22 · Updated 10 months ago
- Linux-based GDDR6/GDDR6X VRAM temperature reader for NVIDIA RTX 3000/4000 series GPUs. ☆103 · Updated 4 months ago
- ☆82 · Updated this week
- ☆253 · Updated 2 months ago
- GPU Power and Performance Manager ☆61 · Updated 10 months ago
- The most powerful and modular Stable Diffusion GUI, API, and backend with a graph/nodes interface. Now ZLUDA enhanced for better AMD GPU p… ☆571 · Updated this week