olealgoritme / gddr6
Linux-based GDDR6/GDDR6X VRAM temperature reader for NVIDIA RTX 3000/4000 series GPUs.
☆107 · Updated 7 months ago
Alternatives and similar repositories for gddr6
Users interested in gddr6 are comparing it to the libraries listed below.
- 8-bit CUDA functions for PyTorch ☆69 · Updated 2 months ago
- Core, Junction, and VRAM temperature reader for Linux + GDDR6/GDDR6X GPUs ☆61 · Updated last month
- ☆419 · Updated 8 months ago
- NVIDIA Linux open GPU with P2P support ☆95 · Updated 2 weeks ago
- Build scripts for ROCm ☆188 · Updated last year
- ☆48 · Updated 2 years ago
- 8-bit CUDA functions for PyTorch, ROCm compatible ☆41 · Updated last year
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆597 · Updated last week
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆216 · Updated 3 weeks ago
- Fast and memory-efficient exact attention ☆203 · Updated 2 weeks ago
- ☆48 · Updated last week
- Make PyTorch models at least run on APUs. ☆56 · Updated 2 years ago
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆165 · Updated last year
- Deep Learning Primitives and Mini-Framework for OpenCL ☆206 · Updated last year
- ☆18 · Updated last year
- Prometheus exporter for Linux-based GDDR6/GDDR6X VRAM and GPU core hot-spot temperature readings for NVIDIA RTX 3000/4000 series GPUs. ☆24 · Updated last year
- GPU benchmark ☆73 · Updated 10 months ago
- DLPrimitives/OpenCL out-of-tree backend for PyTorch ☆383 · Updated 3 weeks ago
- Prepare for DeepSeek R1 inference: benchmark CPU, DRAM, SSD, iGPU, GPU, ... with efficient code. ☆73 · Updated 10 months ago
- Running SXM2/SXM3/SXM4 NVIDIA data center GPUs in consumer PCs ☆132 · Updated 2 years ago
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆641 · Updated this week
- ☆236 · Updated 2 years ago
- AMD-related optimizations for transformer models ☆96 · Updated 2 months ago
- ☆499 · Updated this week
- Simple monkeypatch to boost AMD Navi 3 GPUs ☆48 · Updated 7 months ago
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆64 · Updated 2 years ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆155 · Updated last year
- Benchmark your GPU with ease ☆28 · Updated 6 months ago
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆12 · Updated last year
- REAP: Router-weighted Expert Activation Pruning for SMoE compression ☆145 · Updated last week