Franc-Z / QWen1.5_TensorRT-LLM
Optimize QWen1.5 models with TensorRT-LLM
☆17 · Updated last year
Alternatives and similar repositories for QWen1.5_TensorRT-LLM
Users interested in QWen1.5_TensorRT-LLM are comparing it to the libraries listed below.
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆254 · Updated this week
- ☆610 · Updated 10 months ago
- ☆90 · Updated last year
- ☆332 · Updated 4 months ago
- Export LLaMA to ONNX ☆124 · Updated 5 months ago
- FlagScale is a large model toolkit based on open-sourced projects. ☆281 · Updated this week
- ☆139 · Updated last year
- ☆27 · Updated 6 months ago
- llm-export can export LLM models to ONNX. ☆293 · Updated 4 months ago
- ☆127 · Updated 5 months ago
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆782 · Updated this week
- ☆49 · Updated last week
- Transformer-related optimization, including BERT, GPT ☆59 · Updated last year
- Inference code for LLaMA models ☆121 · Updated last year
- ☆166 · Updated this week
- Simple Dynamic Batching Inference ☆145 · Updated 3 years ago
- Transformer-related optimization, including BERT, GPT ☆39 · Updated 2 years ago
- LLM Inference benchmark ☆419 · Updated 10 months ago
- ☆52 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆136 · Updated 5 months ago
- Best practice for training LLaMA models in Megatron-LM ☆656 · Updated last year
- ☆84 · Updated last year
- Transformer-related optimization, including BERT, GPT ☆17 · Updated last year
- Text embedding ☆146 · Updated last year
- ☆35 · Updated last year
- ☆21 · Updated last year
- ☆79 · Updated last year
- Compare multiple optimization methods on Triton to improve model service performance ☆50 · Updated last year
- Accelerate inference without tears ☆315 · Updated 2 months ago
- [EMNLP 2024 Industry Track] This is the official PyTorch implementation of "LLMC: Benchmarking Large Language Model Quantization with a V… ☆476 · Updated last week