Eddie-Wang1120 / Eddie-Wang-Hackathon2023
Whisper inference with TensorRT-LLM
☆21 · Updated last year
Related projects
Alternatives and complementary repositories for Eddie-Wang-Hackathon2023
- ☆70 · Updated last year
- ☆74 · Updated 2 years ago
- Export LLaMA to ONNX ☆97 · Updated 5 months ago
- ☆140 · Updated 7 months ago
- ASR client for Triton ASR Service ☆19 · Updated last month
- llm-export can export LLM models to ONNX. ☆231 · Updated last week
- ☆124 · Updated 2 weeks ago
- Transformer-related optimization, including BERT, GPT ☆60 · Updated last year
- A quantization algorithm for LLMs ☆101 · Updated 5 months ago
- Transformer-related optimization, including BERT, GPT ☆17 · Updated last year
- Symmetric INT8 GEMM ☆66 · Updated 4 years ago
- Performance of the C++ interfaces of flash attention and flash attention v2 in large language model (LLM) inference scenarios ☆29 · Updated 2 months ago
- LLaMa/RWKV ONNX models, quantization and test cases ☆353 · Updated last year
- ☆57 · Updated this week
- Serving Inside PyTorch ☆145 · Updated this week
- Simple Dynamic Batching Inference ☆145 · Updated 2 years ago
- A CTC decoder for both online and offline ASR models ☆58 · Updated last year
- ☆138 · Updated 2 weeks ago
- A simplified flash-attention implementation using CUTLASS, written for educational purposes ☆32 · Updated 3 months ago
- Simplify large (>2GB) ONNX models ☆44 · Updated 8 months ago
- A Toolkit to Help Optimize Large ONNX Models ☆149 · Updated 6 months ago
- TensorRT Hackathon 2023 second round: inference acceleration for the Llama model based on TensorRT-LLM ☆44 · Updated last year
- Kaldi-compatible online fbank extractor without external dependencies ☆80 · Updated 3 weeks ago
- ☆32 · Updated 9 months ago
- List of Large Language Model Papers ☆55 · Updated last year
- PaddleSpeech TTS cpp ☆35 · Updated last year
- Use PyTorch models in C++ projects ☆135 · Updated 3 years ago
- A Toolkit to Help Optimize ONNX Models ☆81 · Updated this week
- ☆30 · Updated 3 years ago
- ☆32 · Updated last month