dingyuqing05 / trt2022_wenet
☆72 · Updated 2 years ago
Alternatives and similar repositories for trt2022_wenet
Users interested in trt2022_wenet are comparing it to the libraries listed below.
- ☆76 · Updated 3 years ago
- Serving Inside Pytorch ☆163 · Updated last week
- ☆99 · Updated 4 years ago
- simplify >2GB large onnx model ☆63 · Updated 10 months ago
- Use PyTorch model in C++ project ☆140 · Updated 4 years ago
- Simple Dynamic Batching Inference ☆146 · Updated 3 years ago
- ☆26 · Updated 2 years ago
- TensorRT 2022 finals solution: TensorRT inference optimization for MST++, the first Transformer-based image restoration model ☆143 · Updated 3 years ago
- A Toolkit to Help Optimize Large Onnx Model ☆160 · Updated last year
- export llama to onnx ☆136 · Updated 9 months ago
- ☆26 · Updated 2 years ago
- Whisper inference with TensorRT-LLM ☆22 · Updated 2 years ago
- symmetric int8 gemm ☆67 · Updated 5 years ago
- ☆90 · Updated 2 years ago
- ONNX2Pytorch ☆164 · Updated 4 years ago
- A handbook on building your own AI inference engine: everything you need to know, starting from zero ☆270 · Updated 3 years ago
- llm-export can export llm model to onnx. ☆313 · Updated last month
- TensorRT Plugin Autogen Tool ☆367 · Updated 2 years ago
- Run Chinese MobileBert model on SNPE. ☆15 · Updated 2 years ago
- Transformer related optimization, including BERT, GPT ☆59 · Updated 2 years ago
- ☆140 · Updated last year
- ☢️ TensorRT 2023 finals: Llama model inference acceleration based on TensorRT-LLM ☆50 · Updated last year
- Triton Inference Server Model Config and Client Scripts ☆32 · Updated 3 years ago
- ☆120 · Updated 2 years ago
- ☆59 · Updated 10 months ago
- ☆125 · Updated last year
- PyTorch Quantization Aware Training Example ☆140 · Updated last year
- Compare multiple optimization methods on Triton to improve model service performance ☆53 · Updated last year
- Transformer related optimization, including BERT, GPT ☆17 · Updated 2 years ago
- Models and examples built with OneFlow ☆100 · Updated 11 months ago
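Several entries above (e.g. "Simple Dynamic Batching Inference") revolve around dynamic batching for model serving: requests are queued and grouped into a batch either when the batch is full or when a short wait deadline expires, so throughput improves without unbounded latency. A minimal stdlib-only Python sketch of that idea, with hypothetical names (`dynamic_batcher`, `handle_batch`) that are not from any of the listed repos:

```python
import queue
import threading
import time

def dynamic_batcher(requests_q, handle_batch, max_batch=8, max_wait_s=0.01):
    """Collect requests until the batch is full or the wait deadline
    passes, then hand the whole batch to handle_batch at once."""
    while True:
        first = requests_q.get()
        if first is None:              # sentinel: shut down the worker
            return
        batch = [first]
        deadline = time.monotonic() + max_wait_s
        while len(batch) < max_batch:
            remaining = deadline - time.monotonic()
            if remaining <= 0:         # deadline passed: flush what we have
                break
            try:
                item = requests_q.get(timeout=remaining)
            except queue.Empty:        # nothing else arrived in time
                break
            if item is None:           # sentinel mid-batch: flush, then stop
                handle_batch(batch)
                return
            batch.append(item)
        handle_batch(batch)

# Usage: "inference" here is just squaring numbers, batch by batch.
results = []
q = queue.Queue()
t = threading.Thread(target=dynamic_batcher,
                     args=(q, lambda b: results.append([x * x for x in b])))
t.start()
for i in range(5):
    q.put(i)
q.put(None)                            # stop the worker
t.join()
```

Real serving stacks (e.g. Triton's dynamic batcher) add per-request response routing and padding to a common shape, but the queue-plus-deadline loop above is the core mechanism.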