amd / gaia
Run LLM Agents on Ryzen AI PCs in Minutes
☆454 · Updated 2 weeks ago
Alternatives and similar repositories for gaia
Users interested in gaia are comparing it to the libraries listed below.
- No-code CLI designed for accelerating ONNX workflows ☆201 · Updated last month
- Local LLM Server with GPU and NPU Acceleration ☆206 · Updated this week
- Lightweight Inference server for OpenVINO ☆188 · Updated this week
- Fully Open Language Models with Stellar Performance ☆234 · Updated last month
- AMD Ryzen™ AI Software includes the tools and runtime libraries for optimizing and deploying AI inference on AMD Ryzen™ AI powered PCs. ☆555 · Updated last week
- The HIP Environment and ROCm Kit - A lightweight open source build system for HIP and ROCm ☆222 · Updated this week
- llama.cpp fork with additional SOTA quants and improved performance ☆652 · Updated this week
- Review/Check GGUF files and estimate the memory usage and maximum tokens per second. ☆185 · Updated this week
- prima.cpp: Speeding up 70B-scale LLM inference on low-resource everyday home clusters ☆975 · Updated this week
- ☆267 · Updated this week
- Download models from the Ollama library, without Ollama ☆89 · Updated 8 months ago
- Intel® NPU Acceleration Library ☆679 · Updated 2 months ago
- LM inference server implementation based on *.cpp. ☆233 · Updated this week
- Intel® AI Assistant Builder ☆87 · Updated 2 weeks ago
- On-device LLM Inference Powered by X-Bit Quantization ☆256 · Updated last month
- LLM Benchmark for Throughput via Ollama (Local LLMs) ☆255 · Updated 2 weeks ago
- 🌟 Yi-Coder is a series of open-source code language models that delivers state-of-the-art coding performance with fewer than 10 billion … ☆416 · Updated 9 months ago
- ☆430 · Updated this week
- VS Code extension for LLM-assisted code/text completion ☆842 · Updated 2 weeks ago
- Run Generative AI models with simple C++/Python API and using OpenVINO Runtime ☆303 · Updated this week
- Model swapping for llama.cpp (or any local OpenAI-compatible server) ☆1,035 · Updated 2 weeks ago
- Minimal Linux OS with a Model Context Protocol (MCP) gateway to expose local capabilities to LLMs. ☆257 · Updated 3 weeks ago
- LM Studio Python SDK ☆551 · Updated this week
- ☆118 · Updated 3 weeks ago
- llama.cpp fork used by GPT4All ☆56 · Updated 4 months ago
- AI Studio is an independent app for utilizing LLMs. ☆285 · Updated this week
- InferX is an Inference Function-as-a-Service platform ☆116 · Updated 2 weeks ago
- ☆356 · Updated 3 months ago
- CPU inference for the DeepSeek family of large language models in C++ ☆308 · Updated last month
- ☆141 · Updated last week