alexhegit / Playing-with-ROCm
See how to play with ROCm and run it on AMD GPUs!
☆36 · Updated 5 months ago
Alternatives and similar repositories for Playing-with-ROCm
Users interested in Playing-with-ROCm are comparing it to the libraries listed below.
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs. ☆677 · Updated last week
- Easier usage of LLMs in Rockchip's NPU on SBCs like Orange Pi 5 and Radxa Rock 5 series ☆156 · Updated 2 months ago
- top-like script for Rockchip NPUs on Linux ☆58 · Updated 3 months ago
- Run Large Language Models on RK3588 with GPU acceleration ☆117 · Updated 2 years ago
- Easy installation and usage of Rockchip's NPUs found in RK3588 and similar SoCs ☆200 · Updated 2 months ago
- My development fork of llama.cpp, currently working on the RK3588 NPU and a Tenstorrent backend ☆107 · Updated last week
- Ollama alternative for the Rockchip NPU: an efficient solution for running AI and deep learning models on Rockchip devices with optimized NPU… ☆321 · Updated this week
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama, but purpose-built and deeply optimized for AMD NPUs. ☆378 · Updated this week
- Make PyTorch models at least run on APUs. ☆56 · Updated last year
- Automated script to convert Hugging Face and GGUF models to rkllm format for running on the Rockchip NPU ☆36 · Updated 11 months ago
- General site for the GFX803 ROCm stuff ☆120 · Updated 2 months ago
- Run LLM agents on Ryzen AI PCs in minutes ☆684 · Updated last week
- vLLM for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60 ☆307 · Updated 3 weeks ago
- No-code CLI designed for accelerating ONNX workflows ☆215 · Updated 4 months ago
- Allows access via HTTP to an LLM running on the RK3588 NPU; returns a JSON response. ☆26 · Updated last year
- Streaming TTS based on Piper with optional RK3588 NPU support ☆111 · Updated 6 months ago
- Intel® NPU (Neural Processing Unit) Driver ☆326 · Updated this week
- Reverse engineering the RK3588 NPU ☆96 · Updated last year
- High-speed, easy-to-use LLM serving framework for local deployment ☆130 · Updated 2 months ago
- Efficient inference of Transformer models ☆461 · Updated last year
- llama.cpp fork with additional SOTA quants and improved performance ☆1,277 · Updated this week
- Adds a genai backend for Ollama to run generative AI models using the OpenVINO Runtime ☆18 · Updated 4 months ago
- LLM benchmark for throughput via Ollama (local LLMs); a measurement sketch follows this list ☆303 · Updated 2 months ago
- Inference engine for Intel devices. Serves LLMs, VLMs, Whisper, Kokoro-TTS, embedding, and rerank models over OpenAI-compatible endpoints; a client sketch follows this list. ☆226 · Updated this week
- The HIP Environment and ROCm Kit: a lightweight open-source build system for HIP and ROCm ☆514 · Updated this week
- LM inference server implementation based on *.cpp. ☆286 · Updated 2 months ago
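Several entries above drive models through Ollama's HTTP API. As a rough illustration of how the Ollama throughput benchmark can compute tokens per second, here is a minimal sketch against Ollama's documented /api/generate endpoint; the model name and prompt are placeholders, and this is not the benchmark repository's own code.

```python
# Minimal throughput sketch against Ollama's /api/generate endpoint.
# Assumes Ollama is serving on its default port (11434); model name and
# prompt are placeholders, not taken from the benchmark repo above.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.2",  # placeholder: any locally pulled model
        "prompt": "Explain ROCm in one paragraph.",
        "stream": False,      # single JSON object with timing fields
    },
    timeout=600,
)
resp.raise_for_status()
data = resp.json()

# Ollama reports eval_count (generated tokens) and eval_duration (nanoseconds).
tokens = data["eval_count"]
seconds = data["eval_duration"] / 1e9
print(f"{tokens} tokens in {seconds:.2f} s -> {tokens / seconds:.1f} tok/s")
```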
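Several of the servers listed (the gfx906 vLLM build, the Intel inference engine, the *.cpp-based server) expose OpenAI-compatible endpoints, so a single client works across them. Below is a minimal sketch using the official openai Python package; the base URL, port, and model name are assumptions that depend on how the particular server was launched.

```python
# Minimal client sketch for an OpenAI-compatible local server (e.g. vLLM).
# The base_url, port, and model name are assumptions; adjust to your setup.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # assumed local endpoint
    api_key="not-needed",                 # most local servers ignore the key
)

reply = client.chat.completions.create(
    model="my-local-model",  # placeholder model identifier
    messages=[{"role": "user", "content": "What does the RK3588 NPU do?"}],
    max_tokens=128,
)
print(reply.choices[0].message.content)
```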