LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
☆7,694 · Updated Mar 13, 2026
Alternatives and similar repositories for lmdeploy
Users interested in lmdeploy are comparing it to the libraries listed below.
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆24,455 · Updated this week
- A Next-Generation Training Engine Built for Ultra-Large MoE Models ☆5,104 · Updated this week
- Official release of the InternLM series (InternLM, InternLM2, InternLM2.5, InternLM3). ☆7,172 · Updated Oct 30, 2025
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆73,479 · Updated this week
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆13,120 · Updated this week
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆3,945 · Updated Mar 13, 2026
- A lightweight framework for building LLM-based agents ☆2,231 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆5,145 · Updated this week
- Fast and memory-efficient exact attention ☆22,832 · Updated this week
- OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, LLaMa2, Qwen, GLM, Claude, … ☆6,765 · Updated this week
- Large Language Model Text Generation Inference ☆10,803 · Updated Jan 8, 2026
- Transformer-related optimization, including BERT, GPT ☆6,397 · Updated Mar 27, 2024
- Use PEFT or full-parameter training for CPT/SFT/DPO/GRPO of 600+ LLMs (Qwen3.5, DeepSeek-R1, GLM-5, InternLM3, Llama4, ...) and 300+ MLLMs (Qwen3-VL, … ☆13,120 · Updated this week
- Unified Efficient Fine-Tuning of 100+ LLMs & VLMs (ACL 2024) ☆68,728 · Updated this week
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model approaching GPT-4o performance. ☆9,879 · Updated Sep 22, 2025
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆5,034 · Updated Apr 11, 2025
- Mooncake is the serving platform for Kimi, a leading LLM service provided by Moonshot AI. ☆4,921 · Updated this week
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,463 · Updated Jul 17, 2025
- Enhance LLM agents with rich tool APIs ☆404 · Updated Sep 13, 2024
- fastllm is a high-performance LLM inference library with no backend dependencies. It supports tensor-parallel inference for dense models and mixed-mode inference for MoE models; any GPU with 10 GB+ of VRAM can run the full DeepSeek model. A dual-socket 9004/9005 server with a single GPU can serve the original full-precision DeepSeek model at 20 tps with single concurrency; the INT4-quantized model reaches 30 tp… ☆4,173 · Updated Mar 12, 2026
- Retrieval and Retrieval-augmented LLMs ☆11,410 · Updated Mar 10, 2026
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. Documentation: ☆2,317 · Updated May 11, 2025
- An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena. ☆39,428 · Updated Jun 2, 2025
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,101 · Updated Jun 30, 2025
- Swap GPT for any LLM by changing a single line of code. Xinference lets you run open-source, speech, and multimodal models on cloud, on-p… ☆9,134 · Updated this week
- Ongoing research training transformer models at scale ☆15,647 · Updated this week
- DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective. ☆41,807 · Updated Mar 13, 2026
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,719 · Updated Jun 25, 2024
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆1,070 · Updated this week
- InternLM-XComposer2.5-OmniLive: A Comprehensive Multimodal System for Long-term Streaming Video and Audio Interactions ☆2,923 · Updated May 26, 2025
- [ACL 2024 Findings] Agent-FLAN: Designing Data and Methods of Effective Agent Tuning for Large Language Models ☆359 · Updated Mar 22, 2024
- 📚 A curated list of Awesome LLM/VLM Inference Papers with Codes: Flash-Attention, Paged-Attention, WINT8/4, Parallelism, etc. 🎉 ☆5,062 · Updated this week
- verl: Volcano Engine Reinforcement Learning for LLMs ☆19,919 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆8,052 · Updated this week
- Universal LLM Deployment Engine with ML Compilation ☆22,194 · Updated Mar 9, 2026
- Development repository for the Triton language and compiler ☆18,656 · Updated this week
- An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO & REINFORCE++ & TIS & vLLM & Ray & Async RL) ☆9,191 · Updated this week
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆10,426 · Updated Mar 13, 2026
- [NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond. ☆24,578 · Updated Aug 12, 2024
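Several entries above (GPTQ, AWQ, AutoAWQ, bitsandbytes) revolve around low-bit weight quantization. The sketch below illustrates the shared core idea, per-group symmetric low-bit quantization, in plain Python; it is an illustration only, not the API or exact algorithm of any library listed.

```python
def quantize_group(weights, bits=4):
    """Map a group of float weights to signed ints sharing one scale."""
    qmax = 2 ** (bits - 1) - 1                       # 7 for INT4
    scale = max(abs(w) for w in weights) / qmax or 1.0

    def clamp(v):                                    # signed range [-8, 7]
        return max(-qmax - 1, min(qmax, v))

    return [clamp(round(w / scale)) for w in weights], scale

def dequantize_group(q, scale):
    """Recover approximate float weights from ints and the shared scale."""
    return [v * scale for v in q]

# One 4-value group for illustration; real kernels typically use
# groups of 64-128 values, each with its own scale.
group = [0.12, -0.7, 0.33, 0.06]
q, s = quantize_group(group)
recon = dequantize_group(q, s)
# Per-weight reconstruction error is bounded by half the quantization step.
assert max(abs(a - b) for a, b in zip(group, recon)) <= s / 2 + 1e-9
```

Storing one scale per small group, rather than per tensor, is what keeps the reconstruction error low enough for 4-bit weights to preserve model quality; methods such as AWQ additionally choose scales using activation statistics.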