l4rz / running-nvidia-sxm-gpus-in-consumer-pcs
Running SXM2/SXM3/SXM4 NVIDIA data center GPUs in consumer PCs
☆126 · Updated 2 years ago
Alternatives and similar repositories for running-nvidia-sxm-gpus-in-consumer-pcs
Users interested in running-nvidia-sxm-gpus-in-consumer-pcs are comparing it to the repositories listed below.
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆535 · Updated this week
- Linux-based GDDR6/GDDR6X VRAM temperature reader for NVIDIA RTX 3000/4000 series GPUs ☆104 · Updated 6 months ago
- A manual for using the Tesla P40 GPU ☆135 · Updated 11 months ago
- Make PyTorch models at least run on APUs ☆57 · Updated last year
- LLM training in simple, raw C/HIP for AMD GPUs ☆53 · Updated last year
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code ☆74 · Updated 9 months ago
- ☆44 · Updated last month
- NVIDIA Linux open GPU with P2P support ☆1,272 · Updated 5 months ago
- Build scripts for ROCm ☆187 · Updated last year
- Ultra-low-overhead NVIDIA GPU telemetry plugin for Telegraf with memory temperature readings ☆63 · Updated last year
- ☆52 · Updated last year
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆212 · Updated 2 weeks ago
- Repository of model demos using TT-Buda ☆63 · Updated 7 months ago
- ☆42 · Updated last week
- ☆448 · Updated 7 months ago
- vLLM: a high-throughput and memory-efficient inference and serving engine for LLMs ☆92 · Updated this week
- Tiny ASIC implementation of the matrix multiplication unit from "The Era of 1-bit LLMs: All Large Language Models are in 1.58 Bits" ☆169 · Updated last year
- A simple script that works around NVIDIA vGPU licensing with a scheduled task ☆254 · Updated 3 years ago
- Reverse engineering the RK3588 NPU ☆99 · Updated last year
- AMD-related optimizations for transformer models ☆94 · Updated 3 weeks ago
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI-powered PCs ☆683 · Updated 2 weeks ago
- A daemon that automatically manages the performance states of NVIDIA GPUs ☆97 · Updated last week
- NVIDIA Linux open GPU with P2P support ☆70 · Updated 3 weeks ago
- No-code CLI designed for accelerating ONNX workflows ☆216 · Updated 4 months ago
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, embedding, and rerank models over OpenAI endpoints ☆236 · Updated this week
- ctypes wrappers for HIP, CUDA, and OpenCL ☆130 · Updated last year
- Nvidia Instruction Set Specification Generator ☆297 · Updated last year
- ☆489 · Updated this week
- Inference code for LLaMA models ☆42 · Updated 2 years ago
- Hashed Lookup Table based Matrix Multiplication (halutmatmul) - Stella Nera accelerator ☆214 · Updated last year