amd / gaia
Run LLM Agents on Ryzen AI PCs in Minutes
☆843 · Updated last week
Alternatives and similar repositories for gaia
Users interested in gaia are comparing it to the libraries listed below.
- Run LLMs on AMD Ryzen™ AI NPUs in minutes. Just like Ollama, but purpose-built and deeply optimized for AMD NPUs. ☆591 · Updated 2 weeks ago
- Lemonade helps users discover and run local AI apps by serving optimized LLMs right from their own GPUs and NPUs. Join our Discord: https… ☆1,985 · Updated this week
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI-powered PCs. ☆727 · Updated 3 weeks ago
- Inference engine for Intel devices. Serves LLMs, VLMs, Whisper, Kokoro-TTS, embedding, and rerank models over OpenAI-compatible endpoints (see the endpoint sketch after this list). ☆270 · Updated last week
- No-code CLI designed for accelerating ONNX workflows ☆221 · Updated 7 months ago
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆690 · Updated last week
- ☆181 · Updated 2 months ago
- llama.cpp fork with additional SOTA quants and improved performance ☆1,455 · Updated this week
- ☆512 · Updated this week
- ☆740 · Updated this week
- Fresh builds of llama.cpp with AMD ROCm™ 7 acceleration ☆159 · Updated this week
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆216 · Updated last month
- Reliable model swapping for any local OpenAI/Anthropic-compatible server (llama.cpp, vLLM, etc.) ☆2,147 · Updated last week
- Docs for GGUF quantization (unofficial) ☆347 · Updated 5 months ago
- VS Code extension for LLM-assisted code/text completion ☆1,124 · Updated this week
- Fully Open Language Models with Stellar Performance ☆312 · Updated last month
- LLM Benchmark for Throughput via Ollama (Local LLMs) ☆322 · Updated 2 weeks ago
- llama.cpp fork with additional SOTA quants and improved performance ☆21 · Updated 2 weeks ago
- ☆717 · Updated this week
- Welcome to the official repository of SINQ! A novel, fast and high-quality quantization method designed to make any Large Language Model … ☆585 · Updated 2 weeks ago
- Aggregates compute from spare GPU capacity ☆183 · Updated last week
- Proxy that allows you to use Ollama as a copilot, like GitHub Copilot ☆798 · Updated 4 months ago
- Intel® AI Assistant Builder ☆140 · Updated this week
- Download models from the Ollama library, without Ollama (see the registry sketch after this list) ☆119 · Updated last year
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆616 · Updated this week
- Review/check GGUF files and estimate their memory usage and maximum tokens per second (see the GGUF header sketch after this list). ☆223 · Updated this week
- ☆115 · Updated this week
- MLPerf Client is a benchmark for Windows, Linux and macOS, focusing on client form factors in ML inference scenarios. ☆67 · Updated last month
- Intel® NPU Acceleration Library ☆700 · Updated 8 months ago
- Make PyTorch models at least run on APUs. ☆56 · Updated 2 years ago
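
Several of the servers above (Lemonade, the Intel inference engine, llama-swap) expose OpenAI-compatible HTTP endpoints, so one client works against all of them. Below is a minimal sketch of querying such an endpoint; the base URL, port, and model name are assumptions that vary per server, and the note about llama-swap keying on the `model` field reflects how that proxy describes its swapping behavior.

```python
# Minimal sketch: querying a local OpenAI-compatible chat endpoint.
# BASE_URL and the model name are assumptions; check the docs of
# whichever server (Lemonade, llama-swap, etc.) you actually run.
import requests

BASE_URL = "http://localhost:8080/v1"  # assumed default; varies per server

def chat(model: str, prompt: str) -> str:
    """Send one chat-completion request and return the reply text."""
    resp = requests.post(
        f"{BASE_URL}/chat/completions",
        json={
            # Proxies like llama-swap use this field to decide which
            # backend model to load or swap in before answering.
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        },
        timeout=300,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(chat("qwen2.5-7b-instruct", "Say hello in one sentence."))
```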
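The GGUF tooling above (the quantization docs and the checker that estimates memory usage) works from metadata stored in the GGUF file itself. As a hedged illustration of where that information lives, this sketch reads only the fixed GGUF header (magic, version, tensor count, metadata-kv count) per the public GGUF spec; real estimators such as gguf-parser additionally decode the full metadata key/value section and tensor table.

```python
# Minimal sketch: reading the fixed-size GGUF header to sanity-check a file.
# The field layout (magic, version, tensor count, metadata-kv count) follows
# the public GGUF spec; full memory estimation needs the metadata section too.
import struct
import sys

def read_gguf_header(path: str) -> dict:
    with open(path, "rb") as f:
        magic = f.read(4)
        if magic != b"GGUF":
            raise ValueError(f"{path} is not a GGUF file (magic={magic!r})")
        # <I = little-endian uint32, <Q = little-endian uint64
        version, = struct.unpack("<I", f.read(4))
        tensor_count, = struct.unpack("<Q", f.read(8))
        metadata_kv_count, = struct.unpack("<Q", f.read(8))
    return {"version": version, "tensors": tensor_count, "metadata_kvs": metadata_kv_count}

if __name__ == "__main__":
    print(read_gguf_header(sys.argv[1]))
```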
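The "Download models from the Ollama library, without Ollama" entry works because the Ollama registry speaks an OCI-distribution-style protocol. The sketch below fetches a model manifest on that assumption; the registry URL, media type, and manifest layout are inferred from how such standalone downloaders describe themselves and may change.

```python
# Minimal sketch: listing the layers of an Ollama library model without the
# Ollama client, assuming registry.ollama.ai follows the OCI distribution
# spec. Endpoint paths and media types here are assumptions, not a stable API.
import requests

REGISTRY = "https://registry.ollama.ai"

def fetch_manifest(model: str, tag: str = "latest") -> dict:
    """Fetch the manifest JSON for a model in the 'library' namespace."""
    url = f"{REGISTRY}/v2/library/{model}/manifests/{tag}"
    resp = requests.get(
        url,
        headers={"Accept": "application/vnd.docker.distribution.manifest.v2+json"},
        timeout=60,
    )
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    manifest = fetch_manifest("llama3.2")
    for layer in manifest.get("layers", []):
        # Each layer blob (weights, template, params) could then be fetched
        # from /v2/library/<model>/blobs/<digest>.
        print(layer.get("mediaType"), layer.get("digest"), layer.get("size"))
```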