LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
☆7,836 · Updated Apr 29, 2026
Alternatives and similar repositories for lmdeploy
Users that are interested in lmdeploy are comparing it to the libraries listed below.
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆26,832 · Updated this week
- A next-generation training engine built for ultra-large MoE models. ☆5,127 · Updated this week
- Official release of the InternLM series (InternLM, InternLM2, InternLM2.5, InternLM3). ☆7,199 · Updated Oct 30, 2025
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆78,979 · Updated this week
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆13,545 · Updated this week
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆4,036 · Updated this week
- A lightweight framework for building LLM-based agents. ☆2,243 · Updated this week
- FlashInfer: a kernel library for LLM serving. ☆5,544 · Updated this week
- Fast and memory-efficient exact attention. ☆23,628 · Updated this week
- OpenCompass is an LLM evaluation platform supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMA2, Qwen, GLM, Claude, … ☆6,959 · Updated Apr 20, 2026
- Large Language Model Text Generation Inference. ☆10,848 · Updated Mar 21, 2026
- Transformer-related optimization, including BERT and GPT. ☆6,415 · Updated Mar 27, 2024
- Use PEFT or full-parameter training for CPT/SFT/DPO/GRPO of 600+ LLMs (Qwen3.6, DeepSeek-R1, GLM-5.1, InternLM3, Llama4, ...) and 300+ MLLMs (Qwen3-VL… ☆13,977 · Updated this week
- Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024). ☆70,777 · Updated this week
- [CVPR 2024 Oral] InternVL family: a pioneering open-source alternative to GPT-4o; an open-source multimodal chat model approaching GPT-4o performance. ☆10,003 · Updated Sep 22, 2025
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆5,059 · Updated Apr 11, 2025
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆5,242 · Updated this week
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration. ☆3,521 · Updated Jul 17, 2025
- Enhance LLM agents with rich tool APIs. ☆411 · Updated Sep 13, 2024
- fastllm is a high-performance LLM inference library with no backend dependencies. It supports tensor-parallel inference for dense models and mixed-mode inference for MoE models; any GPU with 10 GB+ of memory can run the full DeepSeek model. A dual-socket 9004/9005 server plus a single GPU can serve the original full-precision DeepSeek model at 20 tps with a single concurrent request; the INT4-quantized model reaches 30 tp… ☆4,218 · Updated this week
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ☆2,333 · Updated May 11, 2025
- Retrieval and retrieval-augmented LLMs. ☆11,642 · Updated Apr 22, 2026
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆39,463 · Updated this week
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,110 · Updated Jun 30, 2025
- Swap GPT for any LLM by changing a single line of code. Xinference lets you run open-source, speech, and multimodal models on cloud, on-p… ☆9,281 · Updated this week
- Ongoing research on training transformer models at scale. ☆16,203 · Updated this week
- Medusa: a simple framework for accelerating LLM generation with multiple decoding heads. ☆2,730 · Updated Jun 25, 2024
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆42,231 · Updated this week
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆1,106 · Updated this week
- InternLM-XComposer2.5-OmniLive: a comprehensive multimodal system for long-term streaming video and audio interactions. ☆2,924 · Updated May 26, 2025
- [ACL 2024 Findings] Agent-FLAN: designing data and methods for effective agent tuning of large language models. ☆360 · Updated Mar 22, 2024
- 📚 A curated list of awesome LLM/VLM inference papers with code: Flash-Attention, Paged-Attention, WINT8/4, parallelism, etc. 🎉 ☆5,185 · Updated Apr 20, 2026
- Accessible large language models via k-bit quantization for PyTorch. ☆8,168 · Updated Apr 20, 2026
- Universal LLM deployment engine with ML compilation. ☆22,557 · Updated Apr 22, 2026
- verl/HybridFlow: a flexible and efficient RL post-training framework. ☆21,046 · Updated this week
- Development repository for the Triton language and compiler. ☆19,087 · Updated this week
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆10,625 · Updated this week
- An easy-to-use, scalable, and high-performance agentic RL framework based on Ray (PPO & DAPO & REINFORCE++ & VLM & TIS & vLLM & Ray & Asy… ☆9,441 · Updated this week
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA), built towards GPT-4V level capabilities and beyond. ☆24,753 · Updated Aug 12, 2024
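Several serving engines above (vLLM, SGLang, LMDeploy) owe much of their throughput to paged KV-cache management, popularized by vLLM's PagedAttention. The sketch below shows the bookkeeping idea only, in plain Python with hypothetical names (it is not vLLM's actual API): cache memory is split into fixed-size blocks, each sequence holds a table of block IDs, and blocks are allocated on demand instead of reserving space for the maximum sequence length.

```python
# Toy block allocator illustrating paged KV-cache bookkeeping.
# Names (PagedKVCacheAllocator, append_token) are illustrative only.

class PagedKVCacheAllocator:
    def __init__(self, num_blocks: int, block_size: int):
        self.block_size = block_size          # tokens stored per block
        self.free_blocks = list(range(num_blocks))
        self.block_table = {}                 # seq_id -> list of block ids
        self.seq_len = {}                     # seq_id -> tokens stored

    def append_token(self, seq_id: int) -> int:
        """Reserve cache space for one new token; return its block id."""
        length = self.seq_len.get(seq_id, 0)
        table = self.block_table.setdefault(seq_id, [])
        if length % self.block_size == 0:     # current block full (or none yet)
            if not self.free_blocks:
                raise MemoryError("KV cache exhausted")
            table.append(self.free_blocks.pop())
        self.seq_len[seq_id] = length + 1
        return table[-1]

    def free_sequence(self, seq_id: int) -> None:
        """Return a finished sequence's blocks to the free pool."""
        self.free_blocks.extend(self.block_table.pop(seq_id, []))
        self.seq_len.pop(seq_id, None)

alloc = PagedKVCacheAllocator(num_blocks=8, block_size=4)
for _ in range(6):                            # 6 tokens span 2 blocks
    alloc.append_token(seq_id=0)
assert len(alloc.block_table[0]) == 2
alloc.free_sequence(0)
assert len(alloc.free_blocks) == 8            # all blocks reclaimed
```

Because blocks are recycled as soon as a sequence finishes, many more concurrent sequences fit in the same cache than with contiguous max-length reservations.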
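The "fast and memory-efficient exact attention" entry (FlashAttention) rests on the online-softmax recurrence: attention can be computed tile by tile, rescaling running statistics whenever a new maximum appears, so the full score matrix is never materialized. A minimal pure-Python sketch for a single query and scalar values (illustrative only, no relation to the library's kernels):

```python
# Online softmax: exact attention computed in tiles with running
# max (m), denominator (z), and weighted-value accumulator (acc).
import math

def attention_naive(scores, values):
    """Reference: full softmax over all scores at once."""
    m = max(scores)
    w = [math.exp(s - m) for s in scores]
    return sum(wi * vi for wi, vi in zip(w, values)) / sum(w)

def attention_online(scores, values, tile=2):
    """Same result, but scores are consumed tile by tile."""
    m, z, acc = float("-inf"), 0.0, 0.0
    for i in range(0, len(scores), tile):
        s_tile, v_tile = scores[i:i + tile], values[i:i + tile]
        m_new = max(m, max(s_tile))
        scale = math.exp(m - m_new)       # rescale old stats to the new max
        z = z * scale + sum(math.exp(s - m_new) for s in s_tile)
        acc = acc * scale + sum(math.exp(s - m_new) * v
                                for s, v in zip(s_tile, v_tile))
        m = m_new
    return acc / z

scores = [0.1, 2.0, -1.3, 0.7, 1.1]
values = [1.0, 2.0, 3.0, 4.0, 5.0]
assert abs(attention_naive(scores, values)
           - attention_online(scores, values)) < 1e-12
```

The rescale-by-`exp(m - m_new)` step is what makes the tiled result exact rather than approximate; FlashAttention applies the same recurrence per block of keys inside a fused GPU kernel.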
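Two entries above (AWQ and AutoAWQ) center on activation-aware weight quantization: weight channels that see large activations are scaled up before low-bit rounding, with the inverse scale folded into the activations, so the salient channels keep more precision. A toy sketch of that idea with hypothetical helper names (AWQ actually searches for a per-channel exponent on the activation magnitudes; here the exponent is fixed at 1):

```python
# Toy activation-aware quantization: compare dot-product error of
# plain int4 round-to-nearest vs. pre-scaling salient channels.

def quant_dequant_int4(ws):
    """Symmetric round-to-nearest 4-bit quantization of a weight list."""
    scale = max(abs(w) for w in ws) / 7 or 1.0   # int4 grid: -7..7
    return [round(w / scale) * scale for w in ws]

def matvec_error(w, x):
    """|exact - quantized| for y = w . x with plainly quantized weights."""
    exact = sum(wi * xi for wi, xi in zip(w, x))
    approx = sum(q * xi for q, xi in zip(quant_dequant_int4(w), x))
    return abs(exact - approx)

def matvec_error_awq(w, x, s):
    """Channel i's weight is scaled by s[i] before quantization and its
    activation divided by s[i], leaving the exact product unchanged."""
    scaled_w = [wi * si for wi, si in zip(w, s)]
    scaled_x = [xi / si for xi, si in zip(x, s)]
    exact = sum(wi * xi for wi, xi in zip(w, x))
    approx = sum(q * xi
                 for q, xi in zip(quant_dequant_int4(scaled_w), scaled_x))
    return abs(exact - approx)

x = [10.0, 0.1, 0.1, 0.1]   # channel 0 sees large activations...
w = [0.05, 1.0, -0.9, 0.8]  # ...but has a small weight: salient yet fragile
s = [abs(xi) for xi in x]   # activation-derived scales (AWQ tunes these)
assert matvec_error_awq(w, x, s) < matvec_error(w, x)
```

With plain rounding the small-but-salient weight collapses to zero and the output error is dominated by its large activation; pre-scaling moves that channel onto a finer part of the quantization grid.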
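The Medusa entry accelerates decoding with extra heads that cheaply draft several next tokens, which the base model then verifies in a single pass; the longest agreeing prefix is committed, so output stays identical to ordinary greedy decoding. The toy loop below illustrates only the accept/verify logic, with deterministic stand-in "models" (nothing here is Medusa's real API):

```python
# Toy draft-and-verify loop in the spirit of Medusa speculative decoding.
# base_next / draft_next_k are deterministic stand-ins for real models.

def base_next(ctx):
    """Stand-in for the base LM's greedy next token over vocab 0..6."""
    return (sum(ctx) + len(ctx)) % 7

def draft_next_k(ctx, k):
    """Stand-in for the draft heads: usually right, sometimes wrong."""
    out, c = [], list(ctx)
    for _ in range(k):
        t = base_next(c) if len(c) % 3 else (base_next(c) + 1) % 7
        out.append(t)
        c.append(t)
    return out

def medusa_step(ctx, k=4):
    """Verify k drafted tokens; return the accepted tokens (at least 1)."""
    drafted = draft_next_k(ctx, k)
    accepted, c = [], list(ctx)
    for t in drafted:
        want = base_next(c)      # one batched verification pass in reality
        accepted.append(want)    # the base model's token is always emitted
        c.append(want)
        if want != t:            # draft diverged: stop accepting here
            break
    return accepted

acc = medusa_step([1, 2, 3], k=4)
assert 1 <= len(acc) <= 4        # between 1 and k tokens per base-model step
```

Because every emitted token comes from `base_next` on the true context, the output matches token-by-token greedy decoding; the speedup comes from committing several tokens per verification pass when the draft agrees.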