sasha0552 / nvidia-pstated
A daemon that automatically manages the performance states of NVIDIA GPUs.
☆110 · Updated 2 months ago
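For illustration only (not taken from the nvidia-pstated codebase): a minimal Python sketch, assuming the `pynvml` bindings (the `nvidia-ml-py` package), that polls per-GPU utilization and the current performance state (P-state) via NVML. Monitoring of this kind approximates the signal a P-state management daemon works from; the daemon's actual state switching happens through lower-level driver interfaces and is not shown here.

```python
# Illustrative monitoring sketch (assumes pynvml / nvidia-ml-py is installed).
# It only *reads* utilization and the current P-state; it does not change states.
import time

import pynvml

pynvml.nvmlInit()
try:
    handles = [
        pynvml.nvmlDeviceGetHandleByIndex(i)
        for i in range(pynvml.nvmlDeviceGetCount())
    ]
    while True:
        for i, h in enumerate(handles):
            util = pynvml.nvmlDeviceGetUtilizationRates(h).gpu  # percent busy
            pstate = pynvml.nvmlDeviceGetPerformanceState(h)    # 0 = max perf ... 15 = min perf
            print(f"GPU {i}: utilization={util}% state=P{pstate}")
        time.sleep(5)
except KeyboardInterrupt:
    pass
finally:
    pynvml.nvmlShutdown()
```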
Alternatives and similar repositories for nvidia-pstated
Users interested in nvidia-pstated are comparing it to the repositories listed below.
- GPU Power and Performance Manager ☆66 · Updated last year
- ☆90 · Updated last month
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆622 · Updated this week
- Prometheus exporter for Linux-based GDDR6/GDDR6X VRAM and GPU core hot-spot temperature readings on NVIDIA RTX 3000/4000 series GPUs. ☆24 · Updated last year
- ☆230 · Updated 8 months ago
- KoboldCpp Smart Launcher with GPU Layer and Tensor Override Tuning ☆30 · Updated 8 months ago
- Dictionary-based SLOP detector and analyzer for ShareGPT JSON and plain text ☆80 · Updated last month
- Open source LLM UI, compatible with all local LLM providers. ☆177 · Updated last year
- Your Trusty Memory-enabled AI Companion - Simple RAG chatbot optimized for local LLMs | 12 Languages Supported | OpenAI API Compatible ☆346 · Updated 11 months ago
- A local AI companion that uses a collection of free, open-source AI models to create two virtual companions that will follow you… ☆241 · Updated 3 months ago
- A library and CLI utilities for managing performance states of NVIDIA GPUs. ☆33 · Updated last year
- The official API server for Exllama. OAI compatible, lightweight, and fast. ☆1,119 · Updated last week
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, Embedding and Rerank models over OpenAI endpoints. ☆290 · Updated this week
- Web UI for ExLlamaV2 ☆513 · Updated 11 months ago
- A guide to using the Tesla P40 GPU ☆142 · Updated last year
- Core, Junction, and VRAM temperature reader for Linux + GDDR6/GDDR6X GPUs ☆69 · Updated 3 months ago
- An OpenAI API-compatible API for chat with image input and questions about the images, aka multimodal. ☆266 · Updated 10 months ago
- AI stack for interacting with LLMs, Stable Diffusion, Whisper, xTTS and many other AI models ☆168 · Updated last year
- Code execution utilities for Open WebUI & Ollama ☆318 · Updated last year
- Run multiple resource-heavy Large Models (LM) on the same machine with a limited amount of VRAM/other resources by exposing them on differe… ☆88 · Updated this week
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆165 · Updated last year
- My personal fork of koboldcpp where I hack in experimental samplers. ☆44 · Updated last year
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration ☆176 · Updated this week
- A tool to determine whether or not your PC can run a given LLM ☆167 · Updated last year
- ☆83 · Updated 11 months ago
- This extension enhances the capabilities of textgen-webui by integrating advanced vision models, allowing users to have contextualized co… ☆57 · Updated last year
- Produce your own Dynamic 3.0 Quants and achieve optimum accuracy & SOTA quantization performance! Input your VRAM and RAM and the toolcha… ☆76 · Updated this week
- Writing Extension for Text Generation WebUI ☆64 · Updated 5 months ago
- Y'all thought the dead internet theory wasn't real, but HERE IT IS ☆208 · Updated last year
- LLM Frontend in a single html file ☆692 · Updated last month