TRT2022 / trtllm-llama
☢️ TensorRT Hackathon 2023 finals: Llama model inference acceleration and optimization based on TensorRT-LLM
☆46 · Updated last year
Alternatives and similar repositories for trtllm-llama:
Users interested in trtllm-llama are comparing it to the libraries listed below.
- NVIDIA TensorRT Hackathon 2023 finals topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM ☆41 · Updated last year
- Simplify ONNX models larger than 2 GB ☆54 · Updated 3 months ago
- A Llama model inference framework implemented in CUDA C++ ☆48 · Updated 4 months ago
- Performance of the C++ interface of Flash Attention and Flash Attention v2 in large language model (LLM) inference scenarios ☆35 · Updated 2 weeks ago
- A lightweight Llama-like LLM inference framework built on Triton kernels ☆96 · Updated this week
- Transformer-related optimizations, including BERT and GPT ☆17 · Updated last year
- An LLM deployment project based on ONNX ☆31 · Updated 5 months ago
- Export Llama to ONNX (a minimal export sketch follows this list) ☆115 · Updated 2 months ago
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA, using CUDA cores for the decoding stage of LLM inference ☆34 · Updated this week
- Tianchi NVIDIA TensorRT Hackathon 2023, generative AI model optimization competition: third-place solution in the preliminary round ☆48 · Updated last year
- Serving Inside Pytorch ☆155 · Updated this week
- Hands-on large-model deployment: TensorRT-LLM, Triton Inference Server, vLLM ☆26 · Updated last year
- Large language model ONNX inference framework ☆31 · Updated 2 months ago
- LLM theoretical performance analysis tool supporting parameter-count, FLOPs, memory, and latency analysis (a back-of-the-envelope sketch follows this list) ☆79 · Updated 2 months ago
- Tutorials for writing high-performance GPU operators in AI frameworks ☆129 · Updated last year
- Qwen2 and Llama 3 C++ implementation ☆43 · Updated 9 months ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs ☆107 · Updated 3 weeks ago
- Transformer-related optimizations, including BERT and GPT ☆59 · Updated last year
- Standalone Flash Attention v2 kernel without libtorch dependency ☆105 · Updated 6 months ago
- Compare multiple optimization methods on Triton to improve model-serving performance ☆50 · Updated last year
- OneFlow → ONNX ☆42 · Updated last year
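The "Export Llama to ONNX" entry above is about turning a Hugging Face Llama checkpoint into an ONNX graph. The snippet below is a minimal sketch of what such an export involves, not the listed repo's actual method: it traces a tiny, randomly initialized Llama-style model with `torch.onnx.export`. The tiny config values, the output file name, and the opset version are illustrative assumptions; a production export would load real weights, export the past-key-value cache inputs, and handle the >2 GB external-data format that the "Simplify ONNX models larger than 2 GB" tool above targets.

```python
import torch
from transformers import LlamaConfig, LlamaForCausalLM

# Hypothetical tiny config so the sketch runs without downloading weights;
# a real export would load an actual checkpoint instead.
config = LlamaConfig(
    hidden_size=64,
    intermediate_size=128,
    num_hidden_layers=2,
    num_attention_heads=4,
    num_key_value_heads=4,
    vocab_size=1000,
    use_cache=False,    # no KV cache -> single logits output, simpler graph
    return_dict=False,  # tuple outputs, which the tracer handles cleanly
)
model = LlamaForCausalLM(config).eval()

dummy_input_ids = torch.randint(0, config.vocab_size, (1, 8), dtype=torch.long)

torch.onnx.export(
    model,
    (dummy_input_ids,),
    "llama_tiny.onnx",  # illustrative output path
    input_names=["input_ids"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "seq"},
        "logits": {0: "batch", 1: "seq"},
    },
    opset_version=17,
)
print("exported llama_tiny.onnx")
```

Disabling the KV cache keeps the exported graph a pure function of `input_ids`; exports meant for incremental decoding instead expose past key/value tensors as extra inputs and outputs.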
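Several of the listed tools estimate parameters, FLOPs, and memory analytically. The sketch below shows the standard back-of-the-envelope version of that analysis for a Llama-style decoder; the helper names and the Llama-2-7B-like shapes are illustrative assumptions, not code from any repo above. The two rules of thumb used: the forward pass costs roughly two FLOPs per parameter per token (one multiply-add per weight), and the fp16 KV cache stores one key and one value vector per layer per position.

```python
def params_llama(d_model: int, n_layers: int, vocab: int, d_ff: int) -> int:
    """Rough parameter count for a Llama-style decoder
    (input embedding only; untied LM head and norm params omitted)."""
    attn = 4 * d_model * d_model   # Q, K, V, O projections
    mlp = 3 * d_model * d_ff       # gate, up, down (SwiGLU)
    embed = vocab * d_model
    return n_layers * (attn + mlp) + embed

def flops_per_token(n_params: int) -> int:
    """Forward-pass compute: ~2 FLOPs per parameter per token."""
    return 2 * n_params

def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   seq_len: int, dtype_bytes: int = 2) -> int:
    """KV cache size: one K and one V vector per layer per position."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * dtype_bytes

# Llama-2-7B-like shapes (illustrative):
p = params_llama(d_model=4096, n_layers=32, vocab=32000, d_ff=11008)
print(f"params ≈ {p / 1e9:.1f} B, forward ≈ {flops_per_token(p) / 1e9:.1f} GFLOPs/token")
print(f"KV cache @ 4k ctx ≈ {kv_cache_bytes(32, 32, 128, 4096) / 2**30:.2f} GiB")
```

For these shapes the estimate comes out to about 6.6 B parameters, roughly 13 GFLOPs per generated token, and a 2 GiB fp16 KV cache at a 4096-token context; the listed analysis tools refine this with architecture details (attention FLOPs that grow with sequence length, GQA head counts, and so on).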