amd / RyzenAI-SW
AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs.
☆698 · Updated 2 weeks ago
Alternatives and similar repositories for RyzenAI-SW
Users interested in RyzenAI-SW are comparing it to the libraries listed below.
- ☆496 · Updated this week
- Intel® NPU (Neural Processing Unit) Driver ☆357 · Updated 2 weeks ago
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆613 · Updated this week
- Intel® NPU Acceleration Library ☆700 · Updated 7 months ago
- Onboarding documentation source for the AMD Ryzen™ AI Software Platform. The AMD Ryzen™ AI Software Platform enables developers to take… ☆87 · Updated last week
- Run LLM Agents on Ryzen AI PCs in Minutes ☆792 · Updated this week
- ☆418 · Updated 8 months ago
- No-code CLI designed for accelerating ONNX workflows ☆219 · Updated 5 months ago
- Run Generative AI models with simple C++/Python API and using OpenVINO Runtime ☆381 · Updated this week
- A collection of examples for the ROCm software stack ☆262 · Updated last week
- Build scripts for ROCm ☆188 · Updated last year
- ☆144 · Updated 2 weeks ago
- OpenVINO Intel NPU Compiler ☆74 · Updated last week
- 8-bit CUDA functions for PyTorch ☆68 · Updated 2 months ago
- Fork of LLVM to support AMD AIEngine processors ☆176 · Updated this week
- Dockerfiles for the various software layers defined in the ROCm software platform ☆502 · Updated last week
- AI Tensor Engine for ROCm ☆311 · Updated this week
- AMD (Radeon GPU) ROCm based setup for popular AI tools on Ubuntu 24.04.1 ☆216 · Updated last week
- HIPIFY: Convert CUDA to Portable C++ Code ☆635 · Updated this week
- AI PC starter app for doing AI image creation, image stylizing, and chatbot on a PC powered by an Intel® Arc™ GPU ☆674 · Updated this week
- DLPrimitives/OpenCL out of tree backend for pytorch ☆378 · Updated 2 weeks ago
- llama.cpp fork with additional SOTA quants and improved performance ☆1,358 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆111 · Updated this week
- AMD related optimizations for transformer models ☆96 · Updated last month
- See how to play with ROCm, run it with AMD GPUs! ☆38 · Updated 6 months ago
- A small OpenCL benchmark program to measure peak GPU/CPU performance ☆268 · Updated 2 weeks ago
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama - but purpose-built and deeply optimized for the AMD NPUs ☆488 · Updated last week
- Low-bit LLM inference on CPU/NPU with lookup table ☆898 · Updated 6 months ago
- My development fork of llama.cpp, for now working on the RK3588 NPU and Tenstorrent backends ☆110 · Updated 3 weeks ago
- ☆155 · Updated this week