alexhegit / Playing-with-ROCm
See how to play with ROCm and run it on AMD GPUs!
☆38 · Updated 6 months ago
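As a quick orientation (not taken from the repository itself), here is a minimal sketch of the kind of sanity check such ROCm guides usually begin with: confirming that a ROCm build of PyTorch can see an AMD GPU. This assumes a ROCm wheel of `torch` is installed; ROCm builds report a HIP version via `torch.version.hip` and expose AMD GPUs through the regular `torch.cuda` API.

```python
# Minimal ROCm/PyTorch sanity check (assumes a ROCm build of torch is installed).
import torch

# On ROCm builds torch.version.hip is a version string; on CUDA builds it is None.
print("HIP runtime:", torch.version.hip)

# ROCm PyTorch reuses the torch.cuda API for AMD (HIP) devices.
if torch.cuda.is_available():
    for i in range(torch.cuda.device_count()):
        print(f"GPU {i}: {torch.cuda.get_device_name(i)}")
    # Run a small matmul on the GPU to confirm the stack actually works.
    x = torch.randn(1024, 1024, device="cuda")
    print("matmul ok:", (x @ x).shape)
else:
    print("No ROCm-visible GPU found (check drivers / HSA_OVERRIDE_GFX_VERSION).")
```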
Alternatives and similar repositories for Playing-with-ROCm
Users interested in Playing-with-ROCm are comparing it to the libraries listed below.
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs. ☆698 · Updated 2 weeks ago
- My development fork of llama.cpp. For now, working on RK3588 NPU and Tenstorrent backends ☆110 · Updated 3 weeks ago
- top-like script for Rockchip NPUs on Linux ☆63 · Updated last month
- Automated script to convert Hugging Face and GGUF models to rkllm format for running on Rockchip NPU ☆38 · Updated last year
- Run Large Language Models on RK3588 with GPU acceleration ☆117 · Updated 2 years ago
- Easy installation and usage of Rockchip's NPUs found in RK3588 and similar SoCs ☆213 · Updated 4 months ago
- Ollama alternative for Rockchip NPU: an efficient solution for running AI and deep learning models on Rockchip devices with optimized NPU… ☆355 · Updated this week
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama - but purpose-built and deeply optimized for the AMD NPUs. ☆488 · Updated last week
- Easier usage of LLMs in Rockchip's NPU on SBCs like Orange Pi 5 and Radxa Rock 5 series ☆163 · Updated 4 months ago
- Make PyTorch models at least run on APUs. ☆56 · Updated last year
- vLLM for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60 ☆338 · Updated this week
- ☆49 · Updated 9 months ago
- Allows access via HTTP to LLM running on RK3588 NPU. Returns JSON response. ☆28 · Updated last year
- Intel® NPU (Neural Processing Unit) Driver ☆357 · Updated 2 weeks ago
- Streaming TTS based on Piper with optional RK3588 NPU support ☆113 · Updated 7 months ago
- ☆496 · Updated this week
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆216 · Updated last week
- AMD-related optimizations for transformer models ☆96 · Updated last month
- ☆63 · Updated 6 months ago
- Benchmark LLM performance ☆108 · Updated last year
- Reverse engineering the RK3588 NPU ☆100 · Updated last year
- No-code CLI designed for accelerating ONNX workflows ☆219 · Updated 5 months ago
- ☆154 · Updated last month
- Run LLM Agents on Ryzen AI PCs in Minutes ☆792 · Updated this week
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, Embedding and Rerank models over OpenAI endpoints. ☆260 · Updated last week
- LM inference server implementation based on *.cpp. ☆293 · Updated 2 weeks ago
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆613 · Updated this week
- LLM Benchmark for Throughput via Ollama (Local LLMs) ☆313 · Updated 3 months ago
- Run Generative AI models with simple C++/Python API and using OpenVINO Runtime ☆381 · Updated this week
- 8-bit CUDA functions for PyTorch ☆68 · Updated 2 months ago