eliranwong / MultiAMDGPU_AIDev_Ubuntu
Multi AMD GPU Setup for AI Development on Ubuntu with ROCm
☆47 · Updated 2 weeks ago
Alternatives and similar repositories for MultiAMDGPU_AIDev_Ubuntu
Users interested in MultiAMDGPU_AIDev_Ubuntu are comparing it to the libraries listed below
- Inference engine for Intel devices. Serve LLMs, VLMs, Whisper, Kokoro-TTS, Embedding and Rerank models over OpenAI endpoints. ☆295 · Updated this week
- AMD (Radeon GPU) ROCm-based setup for popular AI tools on Ubuntu 24.04.1 ☆217 · Updated this week
- NVIDIA Linux open GPU with P2P support ☆129 · Updated this week
- A fork of vLLM enabling Pascal architecture GPUs ☆32 · Updated 11 months ago
- GPU Power and Performance Manager ☆66 · Updated last year
- An OpenAI-compatible API for chat with image input and questions about the images, aka multimodal. ☆266 · Updated 11 months ago
- Local Qwen3 LLM inference. One easy-to-understand file of C source with no dependencies. ☆157 · Updated 7 months ago
- Aggregates compute from spare GPU capacity ☆190 · Updated last week
- A fast batching API to serve LLM models ☆189 · Updated last year
- ☆109 · Updated 5 months ago
- Distributed inference for MLX LLMs ☆100 · Updated last year
- An optimized quantization and inference library for running LLMs locally on modern consumer-class GPUs ☆626 · Updated 2 weeks ago
- A multimodal, function calling powered LLM webui. ☆216 · Updated last year
- The official API server for Exllama. OAI compatible, lightweight, and fast. ☆1,121 · Updated 3 weeks ago
- Comparison of the output quality of quantization methods, using Llama 3, transformers, GGUF, EXL2. ☆165 · Updated last year
- No-code CLI designed for accelerating ONNX workflows ☆227 · Updated 8 months ago
- AI stack for interacting with LLMs, Stable Diffusion, Whisper, xTTS and many other AI models ☆168 · Updated last year
- vLLM: A high-throughput and memory-efficient inference and serving engine for LLMs ☆93 · Updated this week
- The main repository for building Pascal-compatible versions of ML applications and libraries. ☆169 · Updated 5 months ago
- The llama-cpp-agent framework is a tool designed for easy interaction with Large Language Models (LLMs), allowing users to chat with LLM … ☆615 · Updated 11 months ago
- ☆209 · Updated last month
- LM inference server implementation based on *.cpp. ☆295 · Updated 2 months ago
- Automatically quantize GGUF models ☆219 · Updated last month
- LLM inference on consumer devices ☆129 · Updated 10 months ago
- Fully-featured, beautiful web interface for vLLM - built with NextJS. ☆172 · Updated last month
- vLLM for AMD gfx906 GPUs, e.g. Radeon VII / MI50 / MI60 ☆370 · Updated last month
- REAP: Router-weighted Expert Activation Pruning for SMoE compression ☆232 · Updated 2 months ago
- Open source LLM UI, compatible with all local LLM providers. ☆177 · Updated last year
- llama.cpp fork with additional SOTA quants and improved performance ☆1,605 · Updated this week
- llmbasedos — Local-First OS Where Your AI Agents Wake Up and Work ☆282 · Updated last month