lemonade-sdk / llamacpp-rocm
Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration
☆58 · Updated this week
Alternatives and similar repositories for llamacpp-rocm
Users interested in llamacpp-rocm are comparing it to the libraries listed below.
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama - but purpose-built and deeply optimized for AMD NPUs. ☆280 · Updated this week
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS over OpenAI endpoints. ☆211 · Updated this week
- ☆178 · Updated last month
- ☆225 · Updated 5 months ago
- llama-swap + a minimal Ollama-compatible API ☆26 · Updated last week
- A daemon that automatically manages the performance states of NVIDIA GPUs. ☆95 · Updated last week
- KoboldCpp Smart Launcher with GPU Layer and Tensor Override Tuning ☆28 · Updated 4 months ago
- ☆89 · Updated 3 weeks ago
- GPU Power and Performance Manager ☆61 · Updated 11 months ago
- Autonomous, agentic, creative story-writing system that incorporates stored embeddings and Knowledge Graphs. ☆79 · Updated this week
- A library and CLI utilities for managing performance states of NVIDIA GPUs. ☆29 · Updated last year
- A persistent local memory for AI, LLMs, or Copilot in VS Code. ☆154 · Updated last week
- ☆165 · Updated last month
- ☆48 · Updated 3 months ago
- ☆83 · Updated this week
- ☆313 · Updated last week
- llama.cpp fork with additional SOTA quants and improved performance ☆1,246 · Updated this week
- Simple Node proxy for llama-server that enables MCP use ☆13 · Updated 5 months ago
- Prometheus exporter for Linux-based GDDR6/GDDR6X VRAM and GPU core hot-spot temperature readings on NVIDIA RTX 3000/4000 series GPUs. ☆23 · Updated last year
- A platform to self-host AI on easy mode ☆171 · Updated last week
- reddacted lets you analyze & sanitize your online footprint using LLMs, PII detection & sentiment analysis to identify anything that migh… ☆108 · Updated 2 months ago
- Privacy-first agentic framework with powerful reasoning & task automation capabilities. Natively distributed and fully ISO 27XXX complian… ☆66 · Updated 6 months ago
- llama.cpp fork with additional SOTA quants and improved performance ☆21 · Updated last week
- Llama.cpp runner/swapper and proxy that emulates LMStudio / Ollama backends ☆46 · Updated last month
- 🚀 FlexLLama - Lightweight self-hosted tool for running multiple llama.cpp server instances with OpenAI v1 API compatibility and multi-GP… ☆36 · Updated last week
- A sleek web interface for Ollama, making local LLM management and usage simple. WebOllama provides an intuitive UI to manage Ollama model… ☆56 · Updated this week
- An Open WebUI function for a better R1 experience ☆78 · Updated 7 months ago
- A comprehensive list of document parsers, covering PDF-to-text conversion and layout extraction. Each tested for support of tables, equat… ☆156 · Updated 2 months ago
- The HIP Environment and ROCm Kit - A lightweight open-source build system for HIP and ROCm ☆438 · Updated this week
- A tool to determine whether or not your PC can run a given LLM ☆164 · Updated 8 months ago