TRT2022 / trtllm-llama
☢️ TensorRT Hackathon 2023 finals — inference acceleration and optimization of the Llama model based on TensorRT-LLM
☆47 · Updated last year
Alternatives and similar repositories for trtllm-llama:
Users interested in trtllm-llama are comparing it to the libraries listed below
- NVIDIA TensorRT Hackathon 2023 finals topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM ☆42 · Updated last year
- ☆24 · Updated last year
- Simplify ONNX models larger than 2 GB ☆56 · Updated 5 months ago
- Tianchi NVIDIA TensorRT Hackathon 2023 — Generative AI Model Optimization Contest, third-place solution in the preliminary round ☆49 · Updated last year
- A llama model inference framework implemented in CUDA C++ ☆54 · Updated 6 months ago
- A lightweight llama-like LLM inference framework based on Triton kernels ☆113 · Updated this week
- Transformer-related optimization, including BERT and GPT ☆17 · Updated last year
- ☆58 · Updated 5 months ago
- Run ChatGLM2-6B on the BM1684X ☆49 · Updated last year
- Hands-on LLM deployment: TensorRT-LLM, Triton Inference Server, vLLM ☆26 · Updated last year
- Performance of the C++ interfaces of FlashAttention and FlashAttention v2 in large language model (LLM) inference scenarios ☆36 · Updated 2 months ago
- ☆90 · Updated last year
- Decoding Attention is specially optimized for MHA, MQA, GQA, and MLA using CUDA cores for the decoding stage of LLM inference ☆36 · Updated last month
- ☆139 · Updated last year
- TensorRT encapsulation: learn, rewrite, practice ☆28 · Updated 2 years ago
- ☆36 · Updated 6 months ago
- TensorRT-in-Action is a GitHub repository providing code examples for using TensorRT, with accompanying Jupyter Notebooks ☆16 · Updated last year
- Large language model ONNX inference framework ☆33 · Updated 3 months ago
- LLM deployment project based on ONNX ☆36 · Updated 7 months ago
- An easy-to-use package for implementing SmoothQuant for LLMs ☆97 · Updated last month
- ☆127 · Updated 4 months ago
- Export llama to ONNX ☆124 · Updated 4 months ago
- Qwen2 and Llama 3 C++ implementation ☆44 · Updated 11 months ago
- Thoroughly understand backpropagation: 15 lines of code, a simple C++ implementation, 98.29% accuracy on MNIST classification ☆34 · Updated 3 years ago
- ☆28 · Updated 3 months ago
- ☆16 · Updated last year
- ☆120 · Updated last year
- ☆71 · Updated 2 years ago
- An ONNX-based quantization tool ☆71 · Updated last year
- TensorRT Hackathon 2022 finals solution: TensorRT inference optimization of MST++, the first Transformer-based image reconstruction model ☆139 · Updated 2 years ago