xor2k / gpu_undervolt
☆42 · Updated 2 years ago
Alternatives and similar repositories for gpu_undervolt
Users interested in gpu_undervolt are comparing it to the repositories listed below.
- Linux-based GDDR6/GDDR6X VRAM temperature reader for NVIDIA RTX 3000/4000 series GPUs. ☆104 · Updated 5 months ago
- Core, Junction, and VRAM temperature reader for Linux + GDDR6/GDDR6X GPUs ☆54 · Updated 4 months ago
- Simple monkeypatch to boost AMD Navi 3 GPUs ☆46 · Updated 5 months ago
- Stable Diffusion and Flux in pure C/C++ ☆21 · Updated 3 weeks ago
- Make PyTorch models at least run on APUs. ☆56 · Updated last year
- Run stable-diffusion-webui with Radeon RX 580 8GB on Ubuntu 22.04.2 LTS ☆66 · Updated last year
- ☆83 · Updated this week
- Prometheus exporter for the Linux GDDR6/GDDR6X VRAM and GPU core hot-spot temperature reader for NVIDIA RTX 3000/4000 series GPUs. ☆23 · Updated last year
- A daemon that automatically manages the performance states of NVIDIA GPUs. ☆95 · Updated 2 weeks ago
- 8-bit CUDA functions for PyTorch, ROCm-compatible ☆41 · Updated last year
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆211 · Updated 3 weeks ago
- NVIDIA Linux open GPU with P2P support ☆59 · Updated 2 weeks ago
- Dictionary-based SLOP detector and analyzer for ShareGPT JSON and text ☆76 · Updated 11 months ago
- ☆399 · Updated 6 months ago
- Croco.Cpp is a fork of KoboldCPP inferring GGML/GGUF models on CPU/CUDA with KoboldAI's UI. It's powered partly by IK_LLama.cpp, and compati… ☆147 · Updated this week
- My personal fork of koboldcpp where I hack in experimental samplers. ☆46 · Updated last year
- ☆37 · Updated 2 years ago
- Stable Diffusion Docker image preconfigured for usage with AMD Radeon cards ☆138 · Updated last year
- A library and CLI utilities for managing performance states of NVIDIA GPUs. ☆29 · Updated last year
- GPU Power and Performance Manager ☆61 · Updated 11 months ago
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆165 · Updated last year
- Input text from speech in any Linux window, the lean, fast and accurate way, using whisper.cpp OFFLINE. Speak with local LLMs via llama.c… ☆142 · Updated 2 months ago
- llama-swap + a minimal ollama-compatible API ☆28 · Updated this week
- A zero-dependency web UI for any LLM backend, including KoboldCpp, OpenAI and AI Horde ☆135 · Updated this week
- DEPRECATED! ☆50 · Updated last year
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆513 · Updated this week
- Code for the paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" with LLaMA implementation. ☆70 · Updated 2 years ago
- Web UI for ExLlamaV2 ☆510 · Updated 8 months ago
- Run multiple resource-heavy Large Models (LM) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆82 · Updated last week
- 8-bit CUDA functions for PyTorch, ported to HIP for use on AMD GPUs ☆51 · Updated 2 years ago