olealgoritme / gddr6
Linux-based GDDR6/GDDR6X VRAM temperature reader for NVIDIA RTX 3000/4000 series GPUs.
☆101 · Updated 2 months ago
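For context on what a tool like this does, here is a minimal sketch in C (not the project's actual code) of the underlying technique: map the GPU's PCI BAR0 region through /dev/mem and read a temperature register. The BAR0 base address, register offset, and bit encoding below are illustrative assumptions; the real values differ per GPU die, and the read requires root.

```c
/* Minimal sketch, not the project's code: read a 32-bit MMIO register from a
 * GPU's BAR0 region via /dev/mem. All addresses and the bit encoding are
 * illustrative assumptions; real values vary per GPU die. Requires root. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define BAR0_BASE   0xA0000000UL  /* assumption: find the real base via lspci -v */
#define TEMP_OFFSET 0x0000E2A8UL  /* assumption: register offset varies per chip */

int main(void) {
    int fd = open("/dev/mem", O_RDONLY | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    long page = sysconf(_SC_PAGESIZE);
    off_t aligned = TEMP_OFFSET & ~(page - 1);       /* page-align the mapping */
    volatile uint8_t *map = mmap(NULL, page, PROT_READ, MAP_SHARED,
                                 fd, BAR0_BASE + aligned);
    if (map == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    uint32_t raw = *(volatile uint32_t *)(map + (TEMP_OFFSET - aligned));
    /* assumption: low 12 bits hold a fixed-point reading in units of 1/32 C */
    printf("VRAM temperature: %.1f C\n", (double)(raw & 0xFFF) / 0x20);

    munmap((void *)map, page);
    close(fd);
    return 0;
}
```

Note that hardened kernels (e.g. with CONFIG_IO_STRICT_DEVMEM) may block /dev/mem access even to MMIO regions, which is one reason such readers need root and may not work on every distribution.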
Alternatives and similar repositories for gddr6
Users interested in gddr6 are comparing it to the libraries listed below.
- 8-bit CUDA functions for PyTorch, ROCm compatible ☆41 · Updated last year
- 8-bit CUDA functions for PyTorch ☆53 · Updated 3 weeks ago
- NVIDIA Linux open GPU with P2P support ☆25 · Updated last month
- Core, Junction, and VRAM temperature reader for Linux + GDDR6/GDDR6X GPUs ☆47 · Updated 2 months ago
- Build scripts for ROCm ☆186 · Updated last year
- Fast and memory-efficient exact attention ☆177 · Updated this week
- ☆42 · Updated 2 years ago
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆436 · Updated this week
- ☆356 · Updated 3 months ago
- ☆139 · Updated 3 weeks ago
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆209 · Updated 4 months ago
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆156 · Updated last year
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆234 · Updated this week
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆12 · Updated last year
- ☆71 · Updated 6 months ago
- A simple Flash Attention v2 implementation with ROCm (RDNA3 GPU, rocWMMA), mainly used for Stable Diffusion (ComfyUI) in Windows ZLUDA en… ☆43 · Updated 10 months ago
- Simple monkeypatch to boost AMD Navi 3 GPUs ☆43 · Updated 2 months ago
- Running SXM2/SXM3/SXM4 NVIDIA data center GPUs in consumer PCs ☆115 · Updated 2 years ago
- ☆233 · Updated 2 years ago
- An efficient implementation of the method proposed in "The Era of 1-bit LLMs" ☆153 · Updated 9 months ago
- llama.cpp fork with additional SOTA quants and improved performance ☆652 · Updated this week
- Prometheus exporter for the Linux-based GDDR6/GDDR6X VRAM and GPU core hot-spot temperature reader for NVIDIA RTX 3000/4000 series GPUs (see the exporter sketch after this list) ☆21 · Updated 9 months ago
- ☆31 · Updated 3 months ago
- ☆17 · Updated 7 months ago
- Make PyTorch models at least run on APUs. ☆54 · Updated last year
- ☆37 · Updated 2 years ago
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated 10 months ago
- GPU benchmark ☆63 · Updated 5 months ago
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆87 · Updated this week
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆64 · Updated last year
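As referenced in the Prometheus exporter entry above, a common way to expose a reading like this is node_exporter's textfile collector: periodically write the metric in the Prometheus text exposition format to a .prom file. A minimal sketch follows; the metric name, output path, and the read_vram_temp_c() stub are illustrative assumptions, not the exporter's actual implementation.

```c
/* Minimal sketch of the textfile-collector pattern; the metric name, path,
 * and temperature stub are illustrative assumptions. */
#include <stdio.h>

/* stub standing in for a real register read (see the sketch near the top) */
static double read_vram_temp_c(void) { return 86.5; /* placeholder sample */ }

int main(void) {
    /* node_exporter scrapes *.prom files from the directory passed to its
     * --collector.textfile.directory flag */
    FILE *f = fopen("/var/lib/node_exporter/gddr6.prom", "w");
    if (!f) { perror("fopen"); return 1; }

    fprintf(f, "# HELP gddr6_vram_temp_celsius GDDR6/GDDR6X VRAM temperature\n");
    fprintf(f, "# TYPE gddr6_vram_temp_celsius gauge\n");
    fprintf(f, "gddr6_vram_temp_celsius{gpu=\"0\"} %.1f\n", read_vram_temp_c());

    fclose(f);
    return 0;
}
```

An alternative design is to serve the same text over HTTP on a /metrics endpoint, as standalone exporters typically do; the textfile variant simply trades the HTTP server for a file write that node_exporter picks up on its next scrape.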