l4rz / running-nvidia-sxm-gpus-in-consumer-pcs
Running SXM2/SXM3/SXM4 NVIDIA data center GPUs in consumer PCs
☆125 · Updated 2 years ago
Alternatives and similar repositories for running-nvidia-sxm-gpus-in-consumer-pcs
Users interested in running-nvidia-sxm-gpus-in-consumer-pcs are comparing it to the libraries listed below.
- LLM training in simple, raw C/HIP for AMD GPUs ☆52 · Updated last year
- Prepare for DeepSeek R1 inference: Benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆73 · Updated 7 months ago
- A manual to help with using the Tesla P40 GPU ☆130 · Updated 10 months ago
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆398 · Updated this week
- Make PyTorch models at least run on APUs. ☆56 · Updated last year
- Linux based GDDR6/GDDR6X VRAM temperature reader for NVIDIA RTX 3000/4000 series GPUs. ☆105 · Updated 5 months ago
- I've built a 4x V100 box for less than $5,500. ☆143 · Updated 3 years ago
- ☆43 · Updated 5 months ago
- Reverse engineering the rk3588 npu ☆95 · Updated last year
- ☆53 · Updated last year
- No-code CLI designed for accelerating ONNX workflows ☆214 · Updated 3 months ago
- My development fork of llama.cpp. For now working on RK3588 NPU and Tenstorrent backends ☆108 · Updated last week
- Build scripts for ROCm ☆185 · Updated last year
- Fork of vLLM for AMD MI25/50/60. A high-throughput and memory-efficient inference and serving engine for LLMs ☆64 · Updated 4 months ago
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs. ☆641 · Updated last month
- Lightweight inference server for OpenVINO ☆211 · Updated this week
- A daemon that automatically manages the performance states of NVIDIA GPUs. ☆96 · Updated 3 weeks ago
- Onboarding documentation source for the AMD Ryzen™ AI Software Platform. The AMD Ryzen™ AI Software Platform enables developers to take… ☆78 · Updated this week
- ☆450 · Updated 5 months ago
- AMD-related optimizations for transformer models ☆88 · Updated last month
- NVIDIA Linux open GPU with P2P support ☆54 · Updated this week
- Run multiple resource-heavy Large Models (LM) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆82 · Updated last week
- Triton for AMD MI25/50/60. Development repository for the Triton language and compiler ☆32 · Updated last week
- Deep Learning Primitives and Mini-Framework for OpenCL ☆200 · Updated last year
- Inference code for LLaMA models ☆42 · Updated 2 years ago
- 8-bit CUDA functions for PyTorch, ported to HIP for use in AMD GPUs ☆51 · Updated 2 years ago
- GPU benchmark ☆68 · Updated 7 months ago
- A small OpenCL benchmark program to measure peak GPU/CPU performance. ☆250 · Updated last week
- ☆231 · Updated 2 years ago
- Unlock vGPU functionality for consumer-grade GPUs. ☆93 · Updated 4 years ago