amd / gaia
Run LLM Agents on Ryzen AI PCs in Minutes
☆421 · Updated last week
Alternatives and similar repositories for gaia
Users interested in gaia are comparing it to the repositories listed below.
- No-code CLI designed for accelerating ONNX workflows ☆198 · Updated 2 weeks ago
- Lightweight Inference server for OpenVINO ☆187 · Updated last week
- Local LLM Server with GPU and NPU Acceleration ☆138 · Updated this week
- ☆541 · Updated last month
- llama.cpp fork with additional SOTA quants and improved performance ☆608 · Updated this week
- ☆425 · Updated this week
- AI PC starter app for doing AI image creation, image stylizing, and chatbot on a PC powered by an Intel® Arc™ GPU ☆552 · Updated last week
- Fully Open Language Models with Stellar Performance ☆231 · Updated 2 weeks ago
- The HIP Environment and ROCm Kit, a lightweight open-source build system for HIP and ROCm ☆177 · Updated this week
- LLM Benchmark for Throughput via Ollama (Local LLMs) ☆244 · Updated 3 weeks ago
- Intel® NPU Acceleration Library ☆680 · Updated 2 months ago
- Run Generative AI models with a simple C++/Python API using the OpenVINO Runtime ☆295 · Updated this week
- Big & Small LLMs working together ☆994 · Updated this week
- CPU inference for the DeepSeek family of large language models in C++ ☆302 · Updated 3 weeks ago
- LM Studio Python SDK ☆487 · Updated 3 weeks ago
- Low-bit LLM inference on CPU/NPU with lookup table ☆811 · Updated 3 weeks ago
- Intel® NPU (Neural Processing Unit) Driver ☆275 · Updated last month
- ☆234 · Updated this week
- InferX is an Inference-Function-as-a-Service platform ☆111 · Updated last week
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆205 · Updated 4 months ago
- Onboarding documentation source for the AMD Ryzen™ AI Software Platform. The AMD Ryzen™ AI Software Platform enables developers to take… ☆66 · Updated this week
- Sparse inferencing for transformer-based LLMs ☆183 · Updated this week
- Model swapping for llama.cpp (or any local OpenAI-compatible server) ☆969 · Updated this week
- Local LLM-powered recursive search & smart knowledge explorer ☆243 · Updated 4 months ago
- AMD-related optimizations for transformer models ☆79 · Updated 7 months ago
- Download models from the Ollama library, without Ollama ☆86 · Updated 7 months ago
- AI Tensor Engine for ROCm ☆208 · Updated this week
- ☆340 · Updated 2 months ago
- ☆111 · Updated last week
- An innovative library for efficient LLM inference via low-bit quantization ☆349 · Updated 9 months ago