MegEngine / InferLLM
A lightweight LLM inference framework
☆730 · Updated last year
Alternatives and similar repositories for InferLLM
Users interested in InferLLM are comparing it to the libraries listed below.
- C++ implementation of Qwen-LM ☆595 · Updated 6 months ago
- LLM deployment project based on MNN. This project has been merged into MNN. ☆1,593 · Updated 5 months ago
- llm-export can export LLM models to ONNX (a generic export sketch appears after this list). ☆295 · Updated 5 months ago
- fastllm is a high-performance LLM inference library with no backend dependencies. It supports tensor-parallel inference of dense models and mixed-mode inference of MoE models; any GPU with 10 GB or more of VRAM can run the full DeepSeek. A dual-socket 9004/9005 server plus a single GPU can deploy the original full-precision DeepSeek model at 20 tps single-concurrency; the INT4-quantized model reaches 30 tp… ☆3,710 · Updated last week
- llama2.c-zh: a small language model supporting Chinese scenarios ☆147 · Updated last year
- LLaMa/RWKV onnx models, quantization and testcase ☆363 · Updated last year
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆256 · Updated 3 weeks ago
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆801 · Updated 3 weeks ago
- Efficient AI Inference & Serving ☆471 · Updated last year
- INT4/INT5/INT8 and FP16 inference on CPU for RWKV language model ☆1,529 · Updated 3 months ago
- C++ implementation of ChatGLM-6B & ChatGLM2-6B & ChatGLM3 & GLM4(V) ☆2,978 · Updated 10 months ago
- The official repo of the Aquila2 series proposed by BAAI, including pretrained & chat large language models. ☆442 · Updated 8 months ago
- [EMNLP 2024 Industry Track] This is the official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a V… ☆492 · Updated this week
- LLM Inference benchmark ☆421 · Updated 11 months ago
- export llama to onnx ☆127 · Updated 6 months ago
- Chinese Mixtral Mixture-of-Experts large models (Chinese Mixtral MoE LLMs) ☆603 · Updated last year
- Optimized BERT transformer inference on NVIDIA GPU. https://arxiv.org/abs/2210.03052 ☆474 · Updated last year
- Play LLaMA2 (official / Chinese version / INT4 / llama2.cpp) together! ONLY 3 STEPS! (non GPU / 5GB vRAM / 8~14GB vRAM) ☆541 · Updated last year
- torch_musa is an open source repository based on PyTorch, which can make full use of the super computing power of MooreThreads graphics c… ☆414 · Updated this week
- Low-bit LLM inference on CPU/NPU with lookup table ☆811 · Updated 3 weeks ago
- TigerBot: A multi-language multi-task LLM ☆2,254 · Updated 6 months ago
- Inferflow is an efficient and highly configurable inference engine for large language models (LLMs). ☆242 · Updated last year
- Open Multilingual Chatbot for Everyone ☆1,268 · Updated 2 weeks ago
- Jittor LLM inference library, featuring high performance, low hardware requirements, good Chinese support, and portability ☆2,431 · Updated 4 months ago
- Uses the peft library to perform efficient 4-bit QLoRA fine-tuning of chatGLM-6B/chatGLM2-6B, then merges the LoRA model into the base model and quantizes to 4-bit (a QLoRA workflow sketch appears after this list). ☆360 · Updated last year
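
Several entries above (llm-export, export llama to onnx) are about converting an LLM to ONNX. Below is a minimal sketch of what a plain `torch.onnx.export` pass over a Hugging Face causal LM looks like; it is not the code path of either project (they additionally handle KV caches, rotary embeddings, and quantization), and the model id `gpt2` and the opset version are placeholder assumptions.

```python
# Minimal sketch: export a Hugging Face causal LM to ONNX with torch.onnx.export.
# Not the llm-export / export-llama-to-onnx code path; those tools also handle
# KV caches and quantization. "gpt2" is a placeholder model id.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "gpt2"  # placeholder; LLaMA-family models follow the same pattern
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id)
model.eval()
model.config.use_cache = False    # export a single forward pass, no KV cache
model.config.return_dict = False  # return a plain tuple so tracing stays simple

input_ids = tokenizer("hello world", return_tensors="pt")["input_ids"]

with torch.no_grad():
    torch.onnx.export(
        model,
        (input_ids,),
        "model.onnx",
        input_names=["input_ids"],
        output_names=["logits"],
        dynamic_axes={"input_ids": {0: "batch", 1: "seq"},
                      "logits": {0: "batch", 1: "seq"}},
        opset_version=17,
    )
```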
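
The last entry describes a peft-based QLoRA workflow: fine-tune ChatGLM in 4-bit, then merge the LoRA adapter back into the base model. The following is a hedged sketch of that general workflow, not the repository's own scripts; the model id, target_modules, adapter path, and hyperparameters are assumptions.

```python
# Hedged sketch of a peft-based QLoRA + merge workflow (not the repo's scripts).
# Model id, target_modules, adapter path, and hyperparameters are assumptions;
# ChatGLM checkpoints require trust_remote_code=True.
import torch
from transformers import AutoModel, BitsAndBytesConfig
from peft import (LoraConfig, PeftModel, get_peft_model,
                  prepare_model_for_kbit_training)

base_id = "THUDM/chatglm2-6b"  # assumed model id

# 1) Load the base model with 4-bit (NF4) weights for QLoRA fine-tuning.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModel.from_pretrained(
    base_id, quantization_config=bnb_config, trust_remote_code=True
)
model = prepare_model_for_kbit_training(model)

# 2) Attach LoRA adapters; "query_key_value" is the usual ChatGLM projection
#    name, but it should be verified against the actual model architecture.
lora_config = LoraConfig(
    r=8, lora_alpha=16, lora_dropout=0.05,
    target_modules=["query_key_value"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
# ... run a training loop here, then model.save_pretrained("lora_adapter") ...

# 3) Merge: reload the base model in half precision, apply the saved adapter,
#    and fold the LoRA weights into the base weights for standalone deployment.
full_model = AutoModel.from_pretrained(
    base_id, torch_dtype=torch.float16, trust_remote_code=True
)
merged = PeftModel.from_pretrained(full_model, "lora_adapter").merge_and_unload()
merged.save_pretrained("merged-chatglm2-6b")
```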