zhaohb / fastapi_tritonserver
☆27 · Updated 4 months ago
Alternatives and similar repositories for fastapi_tritonserver:
Users that are interested in fastapi_tritonserver are comparing it to the libraries listed below
- ☆90 · Updated last year
- Run ChatGLM2-6B on BM1684X ☆49 · Updated last year
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆238 · Updated 2 weeks ago
- Compare multiple optimization methods on Triton to improve model service performance ☆50 · Updated last year
- Triton Inference Server model config and client scripts ☆32 · Updated 3 years ago
- Large Language Model ONNX Inference Framework ☆31 · Updated 2 months ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆132 · Updated 3 months ago
- Hands-on LLM deployment: TensorRT-LLM, Triton Inference Server, vLLM ☆26 · Updated last year
- ☆39 · Updated 4 months ago
- A pure C++ cross-platform LLM acceleration library, callable from Python; supports Baichuan, GLM, LLaMA, and MOSS base models; runs ChatGLM-6B-class models smoothly on mobile and reaches 10,000+ tokens/s on a single GPU ☆45 · Updated last year
- llm-export can export LLM models to ONNX (an illustrative export sketch follows below).
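
For context on what an "export LLM to ONNX" step looks like, here is a minimal sketch using `torch.onnx.export` on a Hugging Face causal LM. This is a generic illustration under assumed defaults, not the llm-export tool's own API; the model name `gpt2` and the opset version are placeholder choices.

```python
# Illustrative sketch: export a Hugging Face causal LM to ONNX.
# NOT the llm-export tool's API; "gpt2" is only a small placeholder model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.config.use_cache = False    # drop past_key_values so outputs stay simple
model.config.return_dict = False  # return a plain tuple for tracing
model.eval()

# Dummy input used only to trace the graph during export.
input_ids = tokenizer("hello world", return_tensors="pt")["input_ids"]

torch.onnx.export(
    model,
    (input_ids,),
    "model.onnx",
    input_names=["input_ids"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "logits": {0: "batch", 1: "sequence"},
    },
    opset_version=17,
)
```

The resulting `model.onnx` can then be loaded with ONNX Runtime or served behind Triton's ONNX backend, which is the common reason these repositories appear together.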