NetEase-Media / grps_trtllm
A higher-performance OpenAI LLM service than vLLM serve: a pure C++ OpenAI-compatible LLM service implemented with GRPS + TensorRT-LLM + Tokenizers.cpp, supporting chat and function calling, AI agents, distributed multi-GPU inference, multimodal capabilities, and a Gradio chat interface.
☆134 · Updated this week
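Since grps_trtllm exposes an OpenAI-compatible chat API, a client can talk to it with a standard chat-completions request. A minimal sketch of building such a request follows; the host, port, and model name are assumptions for illustration (check your own deployment config), and only the chat-completions payload schema itself is standard:

```python
import json

# Hypothetical endpoint address -- adjust to your grps_trtllm deployment.
BASE_URL = "http://localhost:9997/v1/chat/completions"

payload = {
    "model": "qwen2-instruct",  # hypothetical model name
    "messages": [{"role": "user", "content": "Hello, who are you?"}],
    "stream": False,
}

# Serialize the request body; send it with any HTTP client, e.g.
#   requests.post(BASE_URL, json=payload)
body = json.dumps(payload)
print(body)
```

Setting `"stream": True` instead would request server-sent-event chunks, which is how the listed services typically deliver token-by-token output.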
Alternatives and similar repositories for grps_trtllm
Users interested in grps_trtllm are comparing it to the libraries listed below.
- Deep Learning Deployment Framework: Supports tf/torch/trt/trtllm/vllm and other NN frameworks. Supports dynamic batching and streaming mo…☆157 · Updated last week
- Supports mixed-precision inference with vLLM☆83 · Updated 4 months ago
- An acceleration library that supports arbitrary bit-width combinatorial quantization operations☆223 · Updated 7 months ago
- A highly optimized LLM inference acceleration engine for Llama and its variants.☆886 · Updated this week
- MIXQ: Taming Dynamic Outliers in Mixed-Precision Quantization by Online Prediction☆89 · Updated 6 months ago
- Train an LLM from scratch with DeepSpeed, going through pretraining and SFT stages to verify the LLM's ability to learn knowledge, understand language, and answer questions☆153 · Updated 10 months ago
- A Chinese Llama2, from pretraining to reinforcement learning☆88 · Updated last year
- Deploy Qwen1.5 with the vLLM framework and stream its output☆90 · Updated last year
- Mixed-precision inference with TensorRT-LLM☆79 · Updated 6 months ago
- TengineGst is a streaming media analytics framework, based on the GStreamer multimedia framework, for creating varied complex media analytics…☆59 · Updated 3 years ago
- LLM deployment project based on ONNX.☆36 · Updated 7 months ago
- Chinese large language model☆120 · Updated last year
- Model deployment white paper (CUDA|ONNX|TensorRT|C++)🚀🚀🚀☆203 · Updated 7 months ago
- This tool (enhance_long) aims to enhance the Llama2 long-context extrapolation capability in the lowest-cost approach, preferably without …☆45 · Updated last year
- AI edge toolbox: an AI model deployment toolchain for edge devices, especially embedded RTOS platforms, including a model inference engine and model compression tools☆154 · Updated last year
- Build a CUDA Neural Network From Scratch☆19 · Updated 8 months ago
- A step-by-step guide to implementing a deep learning framework using only basic Python syntax and NumPy☆127 · Updated 9 months ago
- Reverse Chain-of-Thought Problem Generation for Geometric Reasoning in Large Multimodal Models☆174 · Updated 6 months ago
- The framework to prune LLMs to any size and any config.☆92 · Updated last year
- NVIDIA TensorRT Hackathon 2023 final-round topic: building and optimizing the Tongyi Qianwen Qwen-7B model with TensorRT-LLM☆42 · Updated last year
- SegmentAnything-OnnxRunner is an example using Meta AI Research's SAM ONNX model in C++. The encoder and decoder of SAM are decoupled in t…☆97 · Updated last year
- Large Language Model ONNX Inference Framework☆33 · Updated 4 months ago
- Official implementation of RARE: Retrieval-Augmented Reasoning Modeling [Work in Progress]☆89 · Updated last month
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including …☆251 · Updated this week
- ☢️ TensorRT 2023 final round: inference acceleration and optimization of the Llama model based on TensorRT-LLM☆47 · Updated last year
- ☆27 · Updated 6 months ago
- FlagPerf is an open-source software platform for benchmarking AI chips.☆331 · Updated last week
- 🚀 No libtorch needed: pure C++ TensorRT deployment of SOLOv2 etc., which can be quickly ported to NX/TX2.☆42 · Updated 2 years ago
- A repo that uses TensorRT to deploy well-trained models. Supports RT-DETR, YOLO-NAS, YOLOv5, YOLOv6, YOLOv7, YOLOv8, YOLOX.☆107 · Updated last year
- Serving Inside PyTorch☆160 · Updated last week