Calculate token/s & GPU memory requirement for any LLM. Supports llama.cpp/ggml/bnb/QLoRA quantization
☆1,399 · Dec 3, 2024 · Updated last year
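The memory estimate gpu_poor produces can be approximated with a simple back-of-envelope calculation: weight memory is parameter count times bytes per weight (which quantization shrinks), plus some overhead. The sketch below is a hypothetical simplification under assumed overhead numbers, not gpu_poor's actual formula, which also accounts for KV cache, activations, and per-backend details.

```python
def estimate_gpu_memory_gb(n_params_b, bits=16, overhead_frac=0.10):
    """Rough GPU memory estimate (GiB) for LLM inference, weights only.

    n_params_b    : model size in billions of parameters
    bits          : bits per weight (16 = fp16, 8 = int8/bnb,
                    4 = e.g. QLoRA NF4 or a llama.cpp q4 type)
    overhead_frac : assumed extra fraction for CUDA context etc.
    """
    weight_bytes = n_params_b * 1e9 * (bits / 8)
    weight_gib = weight_bytes / (1024 ** 3)
    return weight_gib * (1 + overhead_frac)

# A 7B model in fp16 is ~13 GiB of weights (~14.3 GiB with overhead);
# 4-bit quantization cuts that to roughly 3.6 GiB.
print(round(estimate_gpu_memory_gb(7, bits=16), 1))  # 14.3
print(round(estimate_gpu_memory_gb(7, bits=4), 1))   # 3.6
```

This is why 4-bit quantization is the usual route for fitting 7B-class models on consumer GPUs with 6–8 GB of VRAM.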
Alternatives and similar repositories for gpu_poor
Users that are interested in gpu_poor are comparing it to the libraries listed below.
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆26,832 · Updated this week
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆7,823 · Updated this week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆78,385 · Updated this week
- High-speed Large Language Model Serving for Local Deployment ☆9,390 · Jan 24, 2026 · Updated 3 months ago
- Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024) ☆70,777 · Updated this week
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆13,487 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆5,498 · Apr 25, 2026 · Updated last week
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆39,461 · Jun 2, 2025 · Updated 11 months ago
- Universal LLM Deployment Engine with ML Compilation ☆22,557 · Apr 22, 2026 · Updated last week
- A framework for few-shot evaluation of language models. ☆12,331 · Apr 22, 2026 · Updated last week
- Large Language Model Text Generation Inference ☆10,848 · Mar 21, 2026 · Updated last month
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆4,036 · Updated this week
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,222 · Jul 11, 2024 · Updated last year
- Go ahead and axolotl questions ☆11,779 · Updated this week
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,954 · May 3, 2024 · Updated last year
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,108 · Jun 30, 2025 · Updated 10 months ago
- Accessible large language models via k-bit quantization for PyTorch. ☆8,168 · Apr 20, 2026 · Updated last week
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,727 · Jun 25, 2024 · Updated last year
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆3,764 · May 21, 2025 · Updated 11 months ago
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆5,053 · Apr 11, 2025 · Updated last year
- Web UI for training and running open models like Gemma 4, Qwen3.6, DeepSeek, gpt-oss locally. ☆63,070 · Updated this week
- Fast and memory-efficient exact attention ☆23,563 · Updated this week
- Swap GPT for any LLM by changing a single line of code. Xinference lets you run open-source, speech, and multimodal models on cloud, on-p… ☆9,268 · Updated this week
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆5,242 · Updated this week
- Compare different hardware platforms via the Roofline Model for LLM inference tasks. ☆119 · Mar 13, 2024 · Updated 2 years ago
- Serving multiple LoRA-finetuned LLMs as one ☆1,155 · May 8, 2024 · Updated last year
- A Next-Generation Training Engine Built for Ultra-Large MoE Models ☆5,127 · Updated this week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆42,231 · Updated this week
- Train transformer language models with reinforcement learning. ☆18,193 · Updated this week
- ⚡ Build your chatbot within minutes on your favorite device; offer SOTA compression techniques for LLMs; run LLMs efficiently on Intel Pl… ☆2,178 · Oct 8, 2024 · Updated last year
- LLMs built upon Evol Instruct: WizardLM, WizardCoder, WizardMath ☆9,481 · Jun 7, 2025 · Updated 10 months ago
- LlamaIndex is the leading document agent and OCR platform ☆48,997 · Updated this week
- Structured Outputs ☆13,741 · Apr 16, 2026 · Updated 2 weeks ago
- A fast inference library for running LLMs locally on modern consumer-class GPUs ☆4,511 · Mar 4, 2026 · Updated last month
- Tools for merging pretrained large language models. ☆7,023 · Mar 15, 2026 · Updated last month
- LLM inference in C/C++ ☆106,639 · Updated this week
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,708 · Apr 17, 2024 · Updated 2 years ago
- Welcome to the Llama Cookbook! This is your go-to guide for Building with Llama: Getting started with Inference, Fine-Tuning, RAG. We als… ☆18,309 · Apr 21, 2026 · Updated last week
- 20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale. ☆13,326 · Apr 25, 2026 · Updated last week
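One entry above compares hardware via the Roofline Model, which is also the standard way to reason about the token/s numbers gpu_poor reports: an operation's time is bounded by either peak compute or peak memory bandwidth, whichever is slower. The sketch below is a hypothetical illustration with assumed A100-class specs, not any repo's actual code.

```python
def roofline_time_s(flops, bytes_moved, peak_flops, peak_bw):
    """Roofline bound: execution time is limited by the slower of
    the compute term and the memory-traffic term."""
    return max(flops / peak_flops, bytes_moved / peak_bw)

# Decoding one token of a 7B fp16 model reads every weight once
# (~14e9 bytes) and does ~2 FLOPs per parameter (~14 GFLOP).
# On an A100-like GPU (assumed: 312 TFLOPS fp16, 2 TB/s HBM),
# the memory term dominates, so decoding is bandwidth-bound:
t = roofline_time_s(flops=14e9, bytes_moved=14e9,
                    peak_flops=312e12, peak_bw=2e12)
print(f"~{1 / t:.0f} tokens/s upper bound")
```

This is also why quantization speeds up single-stream decoding: shrinking the weights shrinks `bytes_moved`, which is the binding term.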