amd / gaia
Run LLM Agents on Ryzen AI PCs in Minutes
☆587 · Updated last week
Alternatives and similar repositories for gaia
Users interested in gaia are comparing it to the repositories listed below.
- Lemonade helps users run local LLMs with the highest performance by configuring state-of-the-art inference engines for their NPUs and GPU… ☆1,290 · Updated last week
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆398 · Updated this week
- AI PC starter app for doing AI image creation, image stylizing, and chatbot on a PC powered by an Intel® Arc™ GPU. ☆612 · Updated last week
- No-code CLI designed for accelerating ONNX workflows ☆214 · Updated 3 months ago
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs. ☆641 · Updated last month
- ☆257 · Updated last week
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama - but purpose-built and deeply optimized for the AMD NPUs. ☆207 · Updated this week
- Lightweight Inference server for OpenVINO ☆211 · Updated this week
- ☆468 · Updated this week
- Fully Open Language Models with Stellar Performance ☆247 · Updated last month
- llama.cpp fork with additional SOTA quants and improved performance ☆1,198 · Updated this week
- Intel® AI Assistant Builder ☆106 · Updated this week
- VS Code extension for LLM-assisted code/text completion ☆967 · Updated this week
- Docs for GGUF quantization (unofficial) ☆261 · Updated 2 months ago
- Download models from the Ollama library, without Ollama ☆100 · Updated 10 months ago
- ☆338 · Updated this week
- ☆76 · Updated 2 weeks ago
- LM inference server implementation based on *.cpp. ☆273 · Updated last month
- ☆395 · Updated 5 months ago
- LLM Benchmark for Throughput via Ollama (Local LLMs) ☆295 · Updated last month
- Onboarding documentation source for the AMD Ryzen™ AI Software Platform. The AMD Ryzen™ AI Software Platform enables developers to take… ☆78 · Updated this week
- AMD (Radeon GPU) ROCm based setup for popular AI tools on Ubuntu 24.04.1 ☆210 · Updated last week
- ☆124 · Updated last month
- AI Studio is an independent app for utilizing LLMs. ☆304 · Updated 3 weeks ago
- Model swapping for llama.cpp (or any local OpenAI API compatible server) ☆1,579 · Updated this week
- Review/Check GGUF files and estimate the memory usage and maximum tokens per second. ☆207 · Updated last month
- MLPerf Client is a benchmark for Windows and macOS, focusing on client form factors in ML inference scenarios. ☆51 · Updated last month
- vLLM for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60 ☆257 · Updated this week
- LM Studio Python SDK ☆632 · Updated 2 weeks ago
- Intel® NPU Acceleration Library ☆691 · Updated 5 months ago