olealgoritme / gddr6
Linux-based GDDR6/GDDR6X VRAM temperature reader for NVIDIA RTX 3000/4000 series GPUs.
☆107 · Updated 8 months ago
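As a hedged, minimal sketch only (not gddr6's actual code, register map, or addresses), the block below shows the general Linux pattern a register-level VRAM temperature reader can rely on: map a page of physical memory from /dev/mem at an offset inside the GPU's PCI BAR0 window and read a 32-bit register. `GPU_BAR0_BASE`, `TEMP_REG_OFF`, and the bit-field decode are placeholder assumptions that vary per GPU and driver state.

```c
/* Minimal sketch of a /dev/mem register read (placeholder addresses, not gddr6's). */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/types.h>
#include <unistd.h>

#define GPU_BAR0_BASE 0xF0000000UL /* placeholder: the real base comes from the PCI BAR */
#define TEMP_REG_OFF  0x0000E000UL /* placeholder register offset */

int main(void)
{
    int fd = open("/dev/mem", O_RDONLY | O_SYNC); /* needs root and an accessible /dev/mem */
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    long page = sysconf(_SC_PAGESIZE);
    off_t target  = (off_t)(GPU_BAR0_BASE + TEMP_REG_OFF);
    off_t aligned = target & ~((off_t)page - 1); /* mmap offsets must be page-aligned */

    void *map = mmap(NULL, (size_t)page, PROT_READ, MAP_SHARED, fd, aligned);
    if (map == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    uint32_t raw = *(volatile uint32_t *)((uint8_t *)map + (target - aligned));
    /* The register's field layout is GPU-specific; this shift/mask is purely illustrative. */
    printf("raw=0x%08x, temp ~ %u C\n", raw, (raw >> 8) & 0xffu);

    munmap(map, (size_t)page);
    close(fd);
    return 0;
}
```

A real tool would first discover the GPU's BAR0 physical address at runtime (for example from the device's `resource` file under /sys/bus/pci/devices/ or via a PCI access library) rather than hard-coding it, and kernel lockdown or strict /dev/mem settings may block this kind of access entirely.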
Alternatives and similar repositories for gddr6
Users interested in gddr6 are comparing it to the libraries listed below.
- ☆48 · Updated 2 years ago
- Core, Junction, and VRAM temperature reader for Linux + GDDR6/GDDR6X GPUs ☆64 · Updated 2 months ago
- ☆422 · Updated 9 months ago
- 8-bit CUDA functions for PyTorch, ROCm compatible ☆41 · Updated last year
- Make PyTorch models at least run on APUs. ☆56 · Updated 2 years ago
- NVIDIA Linux open GPU with P2P support ☆103 · Updated last month
- 8-bit CUDA functions for PyTorch ☆69 · Updated 3 months ago
- build scripts for ROCm ☆188 · Updated 2 years ago
- Prometheus exporter exposing GDDR6/GDDR6X VRAM and GPU core hot-spot temperatures for NVIDIA RTX 3000/4000 series GPUs on Linux. ☆24 · Updated last year
- DLPrimitives/OpenCL out-of-tree backend for PyTorch ☆383 · Updated last month
- Fast and memory-efficient exact attention ☆207 · Updated this week
- The HIP Environment and ROCm Kit - a lightweight open-source build system for HIP and ROCm ☆690 · Updated this week
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆616 · Updated this week
- ☆236 · Updated 2 years ago
- ☆48 · Updated last month
- AMD-related optimizations for transformer models ☆96 · Updated 2 months ago
- NVIDIA Linux open GPU with P2P support ☆1,310 · Updated 7 months ago
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆216 · Updated last month
- Running SXM2/SXM3/SXM4 NVIDIA data center GPUs in consumer PCs ☆134 · Updated 2 years ago
- A daemon that automatically manages the performance states of NVIDIA GPUs. ☆104 · Updated 2 months ago
- Benchmark your GPU with ease ☆28 · Updated 2 weeks ago
- Stable Diffusion and Flux in pure C/C++ ☆24 · Updated this week
- ☆162 · Updated 6 months ago
- Lower Precision Floating Point Operations ☆59 · Updated this week
- llama.cpp to PyTorch Converter ☆35 · Updated last year
- ☆18 · Updated last year
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆165 · Updated last year
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆64 · Updated 2 years ago
- A simple Flash Attention v2 implementation with ROCm (RDNA3 GPU, rocWMMA), mainly used for Stable Diffusion (ComfyUI) in Windows ZLUDA en… ☆50 · Updated last year
- Simple monkeypatch to boost AMD Navi 3 GPUs ☆48 · Updated 8 months ago