MegEngine / InferLLM
A lightweight LLM inference framework
☆746 Updated last year
Alternatives and similar repositories for InferLLM
Users interested in InferLLM are comparing it to the libraries listed below.
- C++ implementation of Qwen-LM ☆614 Updated last year
- LLM deployment project based on MNN. This project has been merged into MNN. ☆1,617 Updated 11 months ago
- llama2.c-zh: a small language model supporting Chinese-language scenarios ☆150 Updated last year
- llm-export can export LLM models to ONNX. ☆337 Updated 2 months ago
- LLaMA/RWKV ONNX models, quantization, and test cases ☆367 Updated 2 years ago
- torch_musa is an open-source repository based on PyTorch that makes full use of the computing power of MooreThreads graphics c… ☆469 Updated this week
- Export LLaMA to ONNX ☆137 Updated last year
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆272 Updated 5 months ago
- fastllm is a high-performance large-model inference library with no backend dependencies. It supports tensor-parallel inference of dense models and mixed-mode inference of MoE models; any GPU with more than 10 GB of VRAM can run the full DeepSeek model. A dual-socket 9004/9005 server plus a single GPU can serve the original full-precision DeepSeek model at 20 tps single-concurrency; the INT4 quantized model reaches 30 tp… ☆4,123 Updated last month
- Efficient AI Inference & Serving ☆480 Updated 2 years ago
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆995 Updated this week
- LLM Inference benchmark ☆430 Updated last year
- ☆518 Updated this week
- ☆181 Updated this week
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆476 Updated last year
- Chinese Mixtral mixture-of-experts large language models (Chinese Mixtral MoE LLMs) ☆609 Updated last year
- ☆435 Updated 3 months ago
- C++ implementation of ChatGLM-6B, ChatGLM2-6B, ChatGLM3 & GLM4(V) ☆2,969 Updated last year
- Accelerate inference without tears ☆371 Updated last month
- ☆130 Updated last year
- Yuan 2.0 Large Language Model ☆690 Updated last year
- Ascend PyTorch adapter (torch_npu). Mirror of https://gitee.com/ascend/pytorch ☆476 Updated this week
- XVERSE-13B: a multilingual large language model developed by XVERSE Technology Inc. ☆645 Updated last year
- The official repo of the Aquila2 series from BAAI, including pretrained & chat large language models. ☆445 Updated last year
- Large-scale model inference. ☆627 Updated 2 years ago
- INT4/INT5/INT8 and FP16 inference on CPU for the RWKV language model ☆1,556 Updated 9 months ago
- ☆90 Updated 2 years ago
- [EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models, including LLMs, VLMs, and video generative models. ☆652 Updated last month
- Efficient training (including pre-training and fine-tuning) for big models ☆615 Updated 2 months ago
- Low-bit LLM inference on CPU/NPU with lookup tables ☆906 Updated 7 months ago