zhaohb / fastapi_tritonserver
☆27 · Updated 9 months ago
Alternatives and similar repositories for fastapi_tritonserver
Users interested in fastapi_tritonserver are comparing it to the libraries listed below.
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆263 · Updated last week
- llm-export can export LLM models to ONNX. ☆301 · Updated 6 months ago
- Export LLaMA to ONNX. ☆130 · Updated 7 months ago
- Compare multiple optimization methods on Triton to improve model-serving performance. ☆52 · Updated last year
- Run ChatGLM2-6B on the BM1684X. ☆49 · Updated last year
- A high-throughput and memory-efficient inference and serving engine for LLMs. ☆136 · Updated 8 months ago
- Performance testing for LLM inference services. ☆44 · Updated last year
- Transformer-related optimizations, including BERT and GPT. ☆39 · Updated 2 years ago
- Transformer-related optimizations, including BERT and GPT. ☆17 · Updated 2 years ago
- Triton Inference Server model config and client scripts. ☆32 · Updated 3 years ago
- PaddlePaddle custom device implementation. ☆89 · Updated this week
- Serving inside PyTorch. ☆163 · Updated this week
- Large Language Model ONNX inference framework. ☆36 · Updated 6 months ago
- ☢️ TensorRT 2023 competition second round: inference acceleration for Llama models based on TensorRT-LLM. ☆50 · Updated last year
- A pure C++ cross-platform LLM acceleration library, callable from Python; supports Baichuan, GLM, LLaMA, and MOSS base models; runs ChatGLM-6B-class models smoothly on mobile, reaching 10000+ tokens/s on a single GPU. ☆45 · Updated last year
- LLaMA/RWKV ONNX models, quantization, and test cases. ☆362 · Updated 2 years ago
- vLLM documentation in Simplified Chinese (vLLM 中文文档). ☆90 · Updated 2 months ago
- LLM deployment in practice: TensorRT-LLM, Triton Inference Server, vLLM. ☆26 · Updated last year
- LLM101n: Let's build a Storyteller (Chinese edition). ☆131 · Updated 11 months ago
- Optimize Qwen1.5 models with TensorRT-LLM. ☆17 · Updated last year
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆823 · Updated last week
- Accelerate inference without tears. ☆321 · Updated 4 months ago
- Simplify ONNX models larger than 2 GB. ☆61 · Updated 8 months ago