Franc-Z / QWen1.5_TensorRT-LLM
Optimize QWen1.5 models with TensorRT-LLM
☆17 · Updated 10 months ago
Alternatives and similar repositories for QWen1.5_TensorRT-LLM:
Users interested in QWen1.5_TensorRT-LLM are comparing it to the repositories listed below.
- ☆90 · Updated last year
- Simple Dynamic Batching Inference ☆145 · Updated 3 years ago
- ☆604 · Updated 8 months ago
- Transformer related optimization, including BERT, GPT ☆39 · Updated 2 years ago
- ☆127 · Updated 3 months ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆240 · Updated 3 weeks ago
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆678 · Updated 2 months ago
- ☆324 · Updated 2 months ago
- FlagScale is a large-model toolkit based on open-source projects. ☆257 · Updated this week
- ☆158 · Updated this week
- Community-maintained hardware plugin for vLLM on Ascend ☆393 · Updated this week
- Best practice for training LLaMA models in Megatron-LM ☆645 · Updated last year
- ☆33 · Updated last year
- ☆27 · Updated 4 months ago
- ☆139 · Updated 11 months ago
- llm-export can export LLM models to ONNX. ☆274 · Updated 2 months ago
- ☆52 · Updated last year
- Compare multiple optimization methods on Triton to improve model service performance ☆50 · Updated last year
- ☆46 · Updated this week
- Export LLaMA to ONNX ☆118 · Updated 3 months ago
- Text embedding ☆144 · Updated last year
- Inference code for LLaMA models ☆118 · Updated last year
- ☆84 · Updated last year
- Transformer related optimization, including BERT, GPT ☆59 · Updated last year
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆267 · Updated last year
- Server-side deep learning deployment examples ☆451 · Updated 5 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆132 · Updated 3 months ago
- ☆33 · Updated last year
- ☆21 · Updated last year
- LLM Inference benchmark ☆405 · Updated 8 months ago