l4rz / running-nvidia-sxm-gpus-in-consumer-pcs
Running SXM2/SXM3/SXM4 NVIDIA data center GPUs in consumer PCs
☆115 · Updated 2 years ago
Alternatives and similar repositories for running-nvidia-sxm-gpus-in-consumer-pcs
Users interested in running-nvidia-sxm-gpus-in-consumer-pcs are comparing it to the libraries listed below.
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆234 · Updated this week
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆72 · Updated 5 months ago
- Linux-based GDDR6/GDDR6X VRAM temperature reader for NVIDIA RTX 3000/4000 series GPUs. ☆101 · Updated 2 months ago
- ☆54 · Updated last year
- LLM training in simple, raw C/HIP for AMD GPUs ☆50 · Updated 9 months ago
- A manual for using the Tesla P40 GPU ☆126 · Updated 8 months ago
- ☆31 · Updated 3 months ago
- NVIDIA Linux open GPU with P2P support ☆25 · Updated last month
- Make PyTorch models at least run on APUs. ☆54 · Updated last year
- Fork of vLLM for AMD MI25/50/60. A high-throughput and memory-efficient inference and serving engine for LLMs ☆52 · Updated 2 months ago
- ☆356 · Updated 3 months ago
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆209 · Updated 4 months ago
- ☆35 · Updated last week
- ☆448 · Updated 3 months ago
- NVIDIA Linux open GPU with P2P support ☆1,188 · Updated last month
- A daemon that automatically manages the performance states of NVIDIA GPUs. ☆89 · Updated last month
- Lightweight inference server for OpenVINO ☆188 · Updated this week
- AMD-related optimizations for transformer models ☆80 · Updated 3 weeks ago
- All-in-Storage Solution based on DiskANN for DRAM-free Approximate Nearest Neighbor Search ☆66 · Updated 2 weeks ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆87 · Updated this week
- No-code CLI designed for accelerating ONNX workflows ☆201 · Updated last month
- I've built a 4x V100 box for less than $5,500. ☆141 · Updated 3 years ago
- ☆430 · Updated this week
- Samples of good AI-generated CUDA kernels ☆84 · Updated last month
- Juice Community Version Public Release ☆594 · Updated 2 months ago
- Hashed Lookup Table based Matrix Multiplication (halutmatmul) - Stella Nera accelerator ☆211 · Updated last year
- Build scripts for ROCm ☆186 · Updated last year
- Ultra-low-overhead NVIDIA GPU telemetry plugin for Telegraf with memory temperature readings. ☆62 · Updated last year
- Fast and memory-efficient exact attention ☆177 · Updated this week
- My development fork of llama.cpp, currently working on the RK3588 NPU and Tenstorrent backends ☆97 · Updated 2 weeks ago