In2infinity / tuxedo-amd-npu-driver
World's first AMD NPU driver for TUXEDO laptops - enables AI acceleration on Linux
☆34 · Updated 2 months ago
Alternatives and similar repositories for tuxedo-amd-npu-driver
Users interested in tuxedo-amd-npu-driver are comparing it to the repositories listed below.
- ☆89 · Updated 3 weeks ago
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama - but purpose-built and deeply optimized for the AMD NPUs. ☆280 · Updated this week
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS over OpenAI endpoints. ☆211 · Updated this week
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration ☆58 · Updated this week
- ☆313 · Updated last week
- ☆178 · Updated last month
- Local AI voice assistant stack for Home Assistant (GPU-accelerated) with persistent memory, follow-up conversation, and Ollama model reco… ☆209 · Updated 2 months ago
- ☆165 · Updated last month
- A tool to determine whether or not your PC can run a given LLM ☆164 · Updated 8 months ago
- Input text from speech in any Linux window, the lean, fast and accurate way, using whisper.cpp OFFLINE. Speak with local LLMs via llama.c… ☆142 · Updated 2 months ago
- reddacted lets you analyze & sanitize your online footprint using LLMs, PII detection & sentiment analysis to identify anything that migh… ☆108 · Updated 2 months ago
- A platform to self-host AI on easy mode ☆171 · Updated last week
- AMD APU compatible Ollama. Get up and running with OpenAI gpt-oss, DeepSeek-R1, Gemma 3 and other models. ☆108 · Updated this week
- Tiny truly local voice-activated LLM Agent that runs on a Raspberry Pi ☆187 · Updated last week
- Fully local, temporally aware natural language file search on your PC, even without a GPU. Find relevant files using natural language i… ☆120 · Updated last week
- 🚀 FlexLLama - Lightweight self-hosted tool for running multiple llama.cpp server instances with OpenAI v1 API compatibility and multi-GP… ☆36 · Updated last week
- ☆54 · Updated 4 months ago
- ☆83 · Updated 7 months ago
- Enhancing LLMs with LoRA ☆159 · Updated last month
- A LibreOffice Writer extension that adds local-inference generative AI features. ☆140 · Updated last month
- Lightweight offline Linux command tutor using a local LLM and ChromaDB. ☆14 · Updated 5 months ago
- ☆48 · Updated 3 months ago
- Llama.cpp runner/swapper and proxy that emulates LMStudio / Ollama backends ☆46 · Updated last month
- GPU Power and Performance Manager ☆61 · Updated 11 months ago
- ☆83 · Updated this week
- Local LLM Powered Recursive Search & Smart Knowledge Explorer ☆254 · Updated 8 months ago
- ☆48 · Updated last week
- AMD (Radeon GPU) ROCm based setup for popular AI tools on Ubuntu 24.04.1 ☆211 · Updated 3 weeks ago
- Lemonade helps users run local LLMs with the highest performance by configuring state-of-the-art inference engines for their NPUs and GPU… ☆1,385 · Updated last week
- A meta-framework for self-improving LLMs with transparent reasoning ☆22 · Updated last month