nikos230 / Run-Pytorch-with-AMD-Radeon-GPU
A complete guide to running PyTorch with AMD RX 460/470/480 (gfx803) GPUs
☆52 Updated last year
Alternatives and similar repositories for Run-Pytorch-with-AMD-Radeon-GPU
Users interested in Run-Pytorch-with-AMD-Radeon-GPU are comparing it to the repositories listed below.
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, Embedding and Rerank models over OpenAI endpoints. ☆290 Updated last week
- Make PyTorch models at least run on APUs. ☆56 Updated 2 years ago
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama - but purpose-built and deeply optimized for the AMD NPUs. ☆689 Updated this week
- A complete package that provides you with all the components needed to get started or dive deeper into Machine Learning Workloads on Cons… ☆44 Updated this week
- Running SXM2/SXM3/SXM4 NVIDIA data center GPUs in consumer PCs ☆138 Updated 2 years ago
- ☆159 Updated last year
- A manual to help with using the Tesla P40 GPU ☆142 Updated last year
- Limbo for Tensor is a QEMU-based Hypervisor for Tensor-based Google Pixel devices. ☆231 Updated 8 months ago
- Produce your own Dynamic 3.0 Quants and achieve optimum accuracy & SOTA quantization performance! Input your VRAM and RAM and the toolcha… ☆76 Updated last week
- ☆195 Updated 3 months ago
- AMD (Radeon GPU) ROCm based setup for popular AI tools on Ubuntu 24.04.1 ☆217 Updated this week
- NVIDIA Linux open GPU with P2P support ☆1,316 Updated 8 months ago
- ☆191 Updated last year
- ☆123 Updated this week
- Making Apple Watch functional with Android ☆107 Updated last year
- Correctly setup LXC for Termux ☆72 Updated last year
- Fast inference engine for Transformer models ☆56 Updated last year
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration ☆176 Updated last week
- ☆11 Updated last month
- Adds a GenAI backend for Ollama to run generative AI models using the OpenVINO Runtime. ☆22 Updated 7 months ago
- Guides left to us by the legendary SmokelessCPU before he decided to drop off the internet ☆46 Updated 8 months ago
- ☆451 Updated 10 months ago
- ☆66 Updated last year
- LLM Benchmark for Throughput via Ollama (Local LLMs) ☆329 Updated 2 weeks ago
- Juice Community Version Public Release ☆628 Updated 8 months ago
- A tool to determine whether or not your PC can run a given LLM ☆167 Updated last year
- Code sample showing how to run and benchmark models on Qualcomm's Windows PCs ☆104 Updated last year
- Build AI agents for your PC ☆906 Updated last week
- Privacy-first agentic framework with powerful reasoning & task automation capabilities. Natively distributed and fully ISO 27XXX complian… ☆68 Updated 10 months ago
- ☆184 Updated 3 months ago