pandada8 / llm-inference-benchmark
Performance benchmarking for LLM inference services
☆42 · Updated last year
Alternatives and similar repositories for llm-inference-benchmark
Users interested in llm-inference-benchmark are comparing it to the libraries listed below.
- vLLM Documentation in Simplified Chinese / vLLM 中文文档 ☆80 · Updated last month
- ☆109 · Updated 7 months ago
- ☆168 · Updated this week
- How to train an LLM tokenizer ☆150 · Updated last year
- LLM101n: Let's build a Storyteller (Chinese edition) ☆131 · Updated 10 months ago
- Inference code for LLaMA models ☆121 · Updated last year
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆256 · Updated 3 weeks ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆135 · Updated 6 months ago
- Qwen models fine-tuning ☆99 · Updated 3 months ago
- A line-by-line annotated walkthrough of the Baichuan2 codebase, suitable for beginners ☆214 · Updated last year
- Demo for deploying Qwen (Tongyi Qianwen) inference with vLLM ☆581 · Updated last year
- An ecosystem around large language models and multimodal models, mainly covering cross-modal search, speculative decoding, QAT quantization, multimodal quantization, chatbots, and OCR ☆182 · Updated this week
- ☆63 · Updated last year
- Welcome to the "LLM-travel" repository! Explore the inner workings of large language models (LLMs) 🚀. Dedicated to deeply understanding, discussing, and implementing the techniques, principles, and applications of large models. ☆328 · Updated 11 months ago
- Chinese LLM fine-tuning (LLM-SFT) with the MWP-Instruct math instruction dataset; supported models (ChatGLM-6B, LLaMA, Bloom-7B, baichuan-7B), supported features (LoRA, QLoRA, DeepSpeed, UI, TensorboardX), supports (微… ☆203 · Updated last year
- ☆229 · Updated last year
- Training an LLM from scratch on a single 24 GB GPU ☆55 · Updated last month
- InternEvo is an open-source lightweight training framework that aims to support model pre-training without the need for extensive dependencie… ☆393 · Updated 2 weeks ago
- A repository for individuals to experiment with and reproduce the LLM pre-training process ☆441 · Updated last month
- Imitate OpenAI with local models ☆86 · Updated 10 months ago
- Alpaca Chinese Dataset -- a Chinese instruction fine-tuning dataset ☆208 · Updated 8 months ago
- unify-easy-llm (ULM) aims to be a simple one-click training tool for large models, supporting hardware such as Nvidia GPUs and Ascend NPUs as well as commonly used LLMs ☆55 · Updated 11 months ago
- LLaMA Factory Document ☆133 · Updated 2 weeks ago
- LLM Inference benchmark ☆421 · Updated 11 months ago
- Walking through the full ChatGPT technical pipeline from scratch ☆241 · Updated 9 months ago
- ☆336 · Updated last week
- ☆339 · Updated last year
- A vLLM-accelerated implementation of GOT, combined with MinerU for PDF parsing in RAG ☆58 · Updated 7 months ago
- ☆30 · Updated 9 months ago
- Efficient, flexible, and highly fault-tolerant model service management based on SGLang ☆53 · Updated 7 months ago