Multiple NVIDIA GPUs or Apple Silicon for Large Language Model Inference?
☆1,916 · May 13, 2024 · Updated last year
Alternatives and similar repositories for GPU-Benchmarks-on-LLM-Inference
Users that are interested in GPU-Benchmarks-on-LLM-Inference are comparing it to the libraries listed below.
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,514 · Mar 4, 2026 · Updated 2 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆78,979 · Updated this week
- LLM inference in C/C++ ☆107,892 · May 2, 2026 · Updated last week
- Large-scale LLM inference engine ☆1,719 · Updated this week
- Python bindings for llama.cpp ☆10,264 · May 3, 2026 · Updated last week
- Web UI for training and running open models like Gemma 4, Qwen3.6, DeepSeek, gpt-oss locally. ☆63,536 · Updated this week
- Web UI for ExLlamaV2 ☆511 · Feb 5, 2025 · Updated last year
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆27,516 · Updated this week
- High-speed Large Language Model Serving for Local Deployment ☆9,423 · Jan 24, 2026 · Updated 3 months ago
- Run frontier AI locally. ☆44,293 · May 1, 2026 · Updated last week
- Open-source desktop app for local LLMs. Text, vision, tool-calling, OpenAI/Anthropic-compatible API. ☆46,931 · Updated this week
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,836 · Apr 29, 2026 · Updated last week
- Large Language Model Text Generation Inference ☆10,854 · Mar 21, 2026 · Updated last month
- Python SDK, Proxy Server (AI Gateway) to call 100+ LLM APIs in OpenAI (or native) format, with cost tracking, guardrails, load balancing a… ☆45,804 · Updated this week
- A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights. ☆2,915 · Sep 30, 2023 · Updated 2 years ago
- LLM Inference benchmark ☆437 · Jul 23, 2024 · Updated last year
- Universal LLM Deployment Engine with ML Compilation ☆22,598 · Apr 22, 2026 · Updated 2 weeks ago
- Get up and running with Kimi-K2.5, GLM-5, MiniMax, DeepSeek, gpt-oss, Qwen, Gemma and other models. ☆170,820 · Updated this week
- User-friendly AI Interface (Supports Ollama, OpenAI API, ...) ☆135,272 · May 1, 2026 · Updated last week
- Distribute and run LLMs with a single file. ☆24,349 · May 1, 2026 · Updated last week
- Examples in the MLX framework ☆8,567 · Apr 6, 2026 · Updated last month
- MLX: An array framework for Apple silicon ☆25,969 · Updated this week
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆13,545 · Updated this week
- LlamaIndex is the leading document agent and OCR platform ☆49,127 · Updated this week
- Go ahead and axolotl questions ☆11,842 · May 1, 2026 · Updated last week
- Tensor library for machine learning ☆14,594 · Updated this week
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆39,463 · May 1, 2026 · Updated last week
- aider is AI pair programming in your terminal ☆44,510 · Apr 25, 2026 · Updated 2 weeks ago
- A Flexible Framework for Experiencing Heterogeneous LLM Inference/Fine-tune Optimizations ☆17,107 · May 3, 2026 · Updated last week
- Fast and memory-efficient exact attention ☆23,628 · May 3, 2026 · Updated last week
- Fast, flexible LLM inference ☆7,103 · Apr 15, 2026 · Updated 3 weeks ago
- Distributed LLM inference. Connect home devices into a powerful cluster to accelerate LLM inference. More devices means faster inference. ☆2,927 · Apr 14, 2026 · Updated 3 weeks ago
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆5,059 · Apr 11, 2025 · Updated last year
- Stop configuring your AI stack. Start using it. One command brings a complete pre-wired LLM stack with hundreds of services to explore. ☆2,902 · Updated this week
- A framework for few-shot evaluation of language models. ☆12,411 · Apr 30, 2026 · Updated last week
- Run GGUF models easily with a KoboldAI UI. One File. Zero Install. ☆10,387 · May 3, 2026 · Updated last week
- Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024) ☆70,969 · Updated this week
- The official API server for ExLlama. OAI-compatible, lightweight, and fast. ☆1,205 · May 2, 2026 · Updated last week
- Structured Outputs ☆13,776 · Apr 16, 2026 · Updated 3 weeks ago
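The engines listed above are most often compared on decoding throughput (tokens per second), which is what GPU-Benchmarks-on-LLM-Inference measures across NVIDIA GPUs and Apple Silicon. A minimal, engine-agnostic timing harness can sketch the idea; note that `generate` and `dummy_generate` here are hypothetical stand-ins, not the API of any specific library above:

```python
import time


def measure_throughput(generate, prompt, n_tokens):
    """Time one generation call and return tokens per second.

    `generate` is any callable producing `n_tokens` tokens for `prompt`;
    swap in a call to whichever backend you are benchmarking.
    """
    start = time.perf_counter()
    generate(prompt, n_tokens)
    elapsed = time.perf_counter() - start
    return n_tokens / elapsed


# Dummy backend standing in for a real engine, so the sketch is runnable.
def dummy_generate(prompt, n_tokens):
    return " ".join("tok" for _ in range(n_tokens))


tps = measure_throughput(dummy_generate, "Hello", 256)
print(f"{tps:.1f} tokens/s")
```

In practice, benchmarks like the one this page is about average several runs, discard the first (warm-up) run, and report prompt processing and token generation speeds separately, since the two stress the hardware differently.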