dingyuqing05 / trt2022_wenet
☆70 · Updated last year
Related projects
Alternatives and complementary repositories for trt2022_wenet
- ☆74 · Updated 2 years ago
- Serving Inside PyTorch ☆142 · Updated this week
- ☆96 · Updated 3 years ago
- Simple Dynamic Batching Inference ☆145 · Updated 2 years ago
- ☆26 · Updated last year
- TensorRT 2022 Hackathon final-round entry: TensorRT inference optimization for MST++, the first Transformer-based image restoration model ☆135 · Updated 2 years ago
- ☆23 · Updated last year
- Use PyTorch model in C++ project ☆135 · Updated 3 years ago
- A simplified flash-attention implementation using CUTLASS, intended as a teaching example ☆31 · Updated 2 months ago
- Simplify ONNX models larger than 2GB ☆42 · Updated 8 months ago
- A toolkit to help optimize large ONNX models ☆147 · Updated 5 months ago
- ☆140 · Updated 6 months ago
- ONNX2Pytorch ☆158 · Updated 3 years ago
- Export Llama to ONNX ☆95 · Updated 5 months ago
- Symmetric INT8 GEMM ☆66 · Updated 4 years ago
- Whisper inference with TensorRT-LLM ☆21 · Updated last year
- Offline quantization tools for deployment ☆116 · Updated 10 months ago
- ☆117 · Updated last year
- Transformer-related optimization, including BERT and GPT ☆60 · Updated last year
- ☆56 · Updated this week
- ☢️ TensorRT 2023 Hackathon final round: Llama inference acceleration based on TensorRT-LLM ☆44 · Updated last year
- LLM deployment project based on ONNX ☆26 · Updated last month
- Compare multiple optimization methods on Triton to improve model service performance ☆46 · Updated 10 months ago
- A quantization algorithm for LLMs ☆101 · Updated 4 months ago
- ☆136 · Updated this week
- ☆93 · Updated 3 years ago
- ☆123 · Updated this week
- llm-export can export LLM models to ONNX ☆226 · Updated this week
- ☆32 · Updated 3 weeks ago