amd / gaia
Run LLM Agents on Ryzen AI PCs in Minutes
☆766 · Updated last week
Alternatives and similar repositories for gaia
Users who are interested in gaia are comparing it to the libraries listed below:
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama - but purpose-built and deeply optimized for the AMD NPUs. ☆472 · Updated this week
- Lemonade helps users run local LLMs with the highest performance by configuring state-of-the-art inference engines for their NPUs and GPU… ☆1,752 · Updated this week
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs. ☆695 · Updated last week
- AI PC starter app for doing AI image creation, image stylizing, and chatbot on a PC powered by an Intel® Arc™ GPU. ☆665 · Updated 2 weeks ago
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆580 · Updated this week
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, Embedding and Rerank models over OpenAI endpoints. ☆247 · Updated 3 weeks ago
- No-code CLI designed for accelerating ONNX workflows ☆216 · Updated 5 months ago
- llama.cpp fork with additional SOTA quants and improved performance ☆1,341 · Updated this week
- ☆582 · Updated this week
- ☆495 · Updated this week
- Docs for GGUF quantization (unofficial) ☆319 · Updated 4 months ago
- ☆417 · Updated 7 months ago
- AMD (Radeon GPU) ROCm based setup for popular AI tools on Ubuntu 24.04.1 ☆216 · Updated this week
- Reliable model swapping for any local OpenAI compatible server - llama.cpp, vllm, etc. ☆1,933 · Updated this week
- Download models from the Ollama library, without Ollama ☆115 · Updated last year
- ☆491 · Updated this week
- Welcome to the official repository of SINQ! A novel, fast and high-quality quantization method designed to make any Large Language Model … ☆578 · Updated this week
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration ☆112 · Updated last week
- VS Code extension for LLM-assisted code/text completion ☆1,062 · Updated last week
- ☆146 · Updated last month
- Review/Check GGUF files and estimate the memory usage and maximum tokens per second. ☆216 · Updated 3 months ago
- Fully Open Language Models with Stellar Performance ☆303 · Updated 2 weeks ago
- ☆139 · Updated 2 months ago
- Intel® NPU (Neural Processing Unit) Driver ☆341 · Updated last month
- Intel® AI Assistant Builder ☆128 · Updated this week
- LM inference server implementation based on *.cpp. ☆292 · Updated this week
- 🌟 Yi-Coder is a series of open-source code language models that delivers state-of-the-art coding performance with fewer than 10 billion … ☆432 · Updated last year
- LLM Benchmark for Throughput via Ollama (Local LLMs) ☆312 · Updated 3 months ago
- ☆78 · Updated this week
- Intel® NPU Acceleration Library ☆697 · Updated 7 months ago