DataXujing / TensorRT-LLM-ChatGLM3
Hands-on deployment of large language models: TensorRT-LLM, Triton Inference Server, vLLM
☆26 · Updated last year
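Since the repository's topic is serving LLMs through engines such as TensorRT-LLM, Triton Inference Server, and vLLM, here is a minimal sketch of offline batched inference with vLLM for orientation. The model name `THUDM/chatglm3-6b` and the sampling settings are illustrative assumptions, not taken from this repository.

```python
# Minimal vLLM offline-inference sketch (illustrative; model name and
# sampling settings are assumptions, not taken from this repository).
from vllm import LLM, SamplingParams

prompts = [
    "Briefly explain what TensorRT-LLM is.",
    "What does a Triton Inference Server backend do?",
]

# Sampling settings; tune temperature/top_p/max_tokens for your use case.
sampling_params = SamplingParams(temperature=0.7, top_p=0.95, max_tokens=128)

# trust_remote_code is typically required for ChatGLM-style checkpoints.
llm = LLM(model="THUDM/chatglm3-6b", trust_remote_code=True)

outputs = llm.generate(prompts, sampling_params)
for out in outputs:
    print(out.prompt)
    print(out.outputs[0].text)
```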
Alternatives and similar repositories for TensorRT-LLM-ChatGLM3
Users that are interested in TensorRT-LLM-ChatGLM3 are comparing it to the libraries listed below
- Third-place preliminary-round solution for the Tianchi NVIDIA TensorRT Hackathon 2023 Generative AI Model Optimization Track ☆49 · Updated last year
- NVIDIA TensorRT Hackathon 2023 final-round entry: building and optimizing the Qwen-7B (Tongyi Qianwen) model with TensorRT-LLM ☆42 · Updated last year
- HunyuanDiT with TensorRT and libtorch ☆17 · Updated last year
- ☢️ TensorRT 2023 final round: Llama model inference acceleration and optimization based on TensorRT-LLM ☆48 · Updated last year
- ffmpeg + cuvid + TensorRT + multi-camera ☆12 · Updated 5 months ago
- Large Language Model ONNX Inference Framework ☆35 · Updated 5 months ago
- Transformer-related optimization, including BERT and GPT ☆17 · Updated last year
- ☆120 · Updated 2 years ago
- ☆10 · Updated 11 months ago
- Run ChatGLM2-6B on the BM1684X ☆49 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆16 · Updated last year
- Qwen2 and Llama3 C++ implementation ☆44 · Updated last year
- An ONNX-based quantization tool. ☆71 · Updated last year
- A simple, lightweight large language model pipeline framework. ☆25 · Updated 2 months ago
- Async inference for machine learning models ☆26 · Updated 2 years ago
- Inference deployment of Llama3 ☆11 · Updated last year
- ☆19 · Updated last year
- CLIP inference implemented in C++. The model has only minor modifications; the changes and the model-export code can be found in the README, and all model files, including the AX650 models, are in Releases. Newly added support for ChineseCLIP ☆30 · Updated this week
- ☆22 · Updated last year
- A Llama model inference framework implemented in CUDA C++ ☆57 · Updated 7 months ago
- A tool to convert a TensorRT engine/plan to a fake ONNX model ☆39 · Updated 2 years ago
- ☆26 · Updated last year
- ☆90 · Updated last year
- ☆27 · Updated 7 months ago
- A set of examples around MegEngine ☆31 · Updated last year
- ☆27 · Updated this week
- A concise TensorRT tutorial ☆26 · Updated 3 years ago
- Awesome code, projects, books, etc. related to CUDA ☆17 · Updated last week
- End-to-end YOLOv12 model inference acceleration and INT8 quantization with TensorRT ☆12 · Updated 3 months ago
- Stable Diffusion in TensorRT 8.5+ ☆14 · Updated 2 years ago