ROCm / TheRock
The HIP Environment and ROCm Kit - A lightweight, open-source build system for HIP and ROCm
☆514 · Updated this week
Alternatives and similar repositories for TheRock
Users interested in TheRock are comparing it to the libraries listed below.
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs. ☆677 · Updated last week
- ☆409 · Updated 6 months ago
- No-code CLI designed for accelerating ONNX workflows ☆215 · Updated 4 months ago
- AI Tensor Engine for ROCm ☆292 · Updated this week
- ☆481 · Updated this week
- llama.cpp fork with additional SOTA quants and improved performance ☆1,277 · Updated this week
- Onboarding documentation source for the AMD Ryzen™ AI Software Platform. The AMD Ryzen™ AI Software Platform enables developers to take… ☆83 · Updated this week
- 8-bit CUDA functions for PyTorch ☆66 · Updated last month
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama - but purpose-built and deeply optimized for the AMD NPUs. ☆378 · Updated this week
- A collection of examples for the ROCm software stack ☆250 · Updated this week
- [DEPRECATED] Moved to ROCm/rocm-libraries repo ☆114 · Updated this week
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, Embedding and Rerank models over OpenAI endpoints. ☆226 · Updated this week
- Monorepo for ROCm libraries ☆158 · Updated this week
- HIPIFY: Convert CUDA to Portable C++ Code (a brief usage sketch follows this list) ☆625 · Updated last week
- [DEPRECATED] Moved to ROCm/rocm-libraries repo ☆388 · Updated this week
- Run LLM Agents on Ryzen AI PCs in Minutes ☆684 · Updated last week
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆212 · Updated this week
- Development repository for the Triton language and compiler ☆136 · Updated this week
- ☆152 · Updated this week
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆12 · Updated last year
- AMD's graph optimization engine. ☆262 · Updated this week
- DLPrimitives/OpenCL out-of-tree backend for PyTorch ☆372 · Updated last year
- ☆126 · Updated this week
- chipStar is a tool for compiling and running HIP/CUDA on SPIR-V via OpenCL or Level Zero APIs. ☆301 · Updated this week
- AMD SMI ☆91 · Updated this week
- Fork of LLVM to support AMD AIEngine processors ☆171 · Updated this week
- rocWMMA ☆136 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆107 · Updated this week
- vLLM for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60 ☆307 · Updated 3 weeks ago
- A simple Flash Attention v2 implementation with ROCm (RDNA3 GPU, roc wmma), mainly used for Stable Diffusion (ComfyUI) in Windows ZLUDA en… ☆48 · Updated last year
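
Several entries above revolve around CUDA-to-HIP portability (HIPIFY, chipStar, the ZLUDA-based setups). As a rough illustration of what the HIPIFY entry refers to, below is a minimal sketch (not taken from any listed repo) of the HIP source that a tool such as hipify-perl or hipify-clang produces from an equivalent CUDA file, typically via an invocation like `hipify-perl input.cu > output.hip.cpp`. The kernel and buffer size are hypothetical; the HIP runtime calls and header are real API.

```cpp
// Minimal sketch of HIP code of the kind HIPIFY emits when porting CUDA.
// The kernel itself is a made-up example.
#include <hip/hip_runtime.h>  // replaces #include <cuda_runtime.h>

__global__ void scale(float* x, float a, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // thread-indexing intrinsics are unchanged
    if (i < n) x[i] *= a;
}

int main() {
    const int n = 1 << 20;          // hypothetical buffer size
    float* d_x = nullptr;
    hipMalloc(&d_x, n * sizeof(float));              // was cudaMalloc
    hipMemset(d_x, 0, n * sizeof(float));            // was cudaMemset
    scale<<<(n + 255) / 256, 256>>>(d_x, 2.0f, n);   // triple-chevron launches also work in HIP
    hipDeviceSynchronize();                          // was cudaDeviceSynchronize
    hipFree(d_x);                                    // was cudaFree
    return 0;
}
```

On AMD GPUs this builds with hipcc against the ROCm runtime; the same HIP source can also be compiled for NVIDIA hardware, where it maps back onto the CUDA runtime.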