dingyuqing05 / trt2022_wenet
☆71, updated 2 years ago
Alternatives and similar repositories for trt2022_wenet:
Users interested in trt2022_wenet are comparing it to the repositories listed below.
- Serving Inside PyTorch ☆156, updated this week
- Simplify large (>2GB) ONNX models ☆54, updated 3 months ago
- TensorRT 2022 finals solution: TensorRT inference optimization for MST++, the first Transformer-based image reconstruction model ☆138, updated 2 years ago
- Simple Dynamic Batching Inference ☆145, updated 3 years ago
- A Toolkit to Help Optimize Large ONNX Models ☆153, updated 10 months ago
- Use a PyTorch model in a C++ project ☆137, updated 3 years ago
- Export Llama models to ONNX ☆115, updated 2 months ago
- ONNX2Pytorch ☆160, updated 3 years ago
- ☢️ TensorRT 2023 finals: Llama model inference acceleration based on TensorRT-LLM ☆46, updated last year
- LLM deployment project based on ONNX ☆31, updated 5 months ago
- Symmetric int8 GEMM ☆66, updated 4 years ago
- Whisper inference with TensorRT-LLM ☆21, updated last year
- Translate different platforms' networks to an Intermediate Representation (IR) ☆44, updated 6 years ago
- Compare multiple optimization methods on Triton to improve model-serving performance ☆50, updated last year
- Transformer-related optimization, including BERT and GPT ☆59, updated last year
- Converter from MegEngine to other frameworks ☆69, updated last year
- Inference of quantization-aware trained networks using TensorRT ☆80, updated 2 years ago
- A simplified flash-attention implementation using CUTLASS, intended as a teaching example ☆38, updated 7 months ago
- Offline quantization tools for deployment ☆124, updated last year
- Transformer-related optimization, including BERT and GPT ☆17, updated last year