InternLM / lmdeploy
LMDeploy is a toolkit for compressing, deploying, and serving LLMs.
☆5,603 · Updated this week
Alternatives and similar repositories for lmdeploy:
Users interested in lmdeploy are comparing it to the libraries listed below.
- An efficient, flexible and full-featured toolkit for fine-tuning LLMs (InternLM2, Llama3, Phi3, Qwen, Mistral, ...) ☆4,246 · Updated 3 weeks ago
- OpenCompass is an LLM evaluation platform, supporting a wide range of models (Llama3, Mistral, InternLM2, GPT-4, Llama2, Qwen, GLM, Claude, … ☆4,681 · Updated last week
- SGLang is a fast serving framework for large language models and vision language models. ☆10,325 · Updated this week
- An Easy-to-use, Scalable and High-performance RLHF Framework (70B+ PPO Full Tuning & Iterative DPO & LoRA & RingAttention & RFT) ☆4,809 · Updated this week
- The official repo of Qwen-VL (通义千问-VL), the chat & pretrained large vision language model proposed by Alibaba Cloud. ☆5,470 · Updated 6 months ago
- Use PEFT or full-parameter fine-tuning for 450+ LLMs (Qwen2.5, InternLM3, GLM4, Llama3.3, Mistral, Yi1.5, Baichuan2, DeepSeek-R1, ...) and 15… ☆5,674 · Updated this week
- Official release of the InternLM series (InternLM, InternLM2, InternLM2.5, InternLM3). ☆6,760 · Updated 2 weeks ago
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆2,911 · Updated this week
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆4,702 · Updated 3 weeks ago
- Retrieval and retrieval-augmented LLMs ☆8,555 · Updated last week
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. ☆1,946 · Updated last month
- [CVPR 2024 Oral] InternVL Family: A Pioneering Open-Source Alternative to GPT-4o. An open-source multimodal dialogue model with performance approaching GPT-4o. ☆7,059 · Updated last month
- A framework for few-shot evaluation of language models. ☆7,848 · Updated this week
- A state-of-the-art open visual language model | multimodal pretrained model ☆6,353 · Updated 8 months ago
- Agent framework and applications built upon Qwen>=2.0, featuring Function Calling, Code Interpreter, RAG, and Chrome extension. ☆5,852 · Updated 3 weeks ago
- Fast and memory-efficient exact attention ☆15,541 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆6,697 · Updated this week
- PyTorch-native post-training library ☆4,856 · Updated this week
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆6,795 · Updated 7 months ago
- Large Language Model Text Generation Inference ☆9,777 · Updated this week
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆2,743 · Updated last week
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆38,475 · Updated this week
- TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and build TensorRT engines that contain… ☆9,441 · Updated this week
- Tools for merging pretrained large language models. ☆5,260 · Updated last week
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,426 · Updated 7 months ago
- 📖 A curated list of Awesome LLM/VLM Inference Papers with codes: WINT8/4, Flash-Attention, Paged-Attention, Parallelism, etc. 🎉🎉 ☆3,456 · Updated this week
- A lightweight framework for building LLM-based agents ☆2,030 · Updated last week
- Train transformer language models with reinforcement learning. ☆11,782 · Updated this week
- Replace OpenAI GPT with another LLM in your app by changing a single line of code. Xinference gives you the freedom to use any LLM you ne… ☆6,428 · Updated this week
- A blazing fast inference solution for text embedding models ☆3,175 · Updated 3 weeks ago