zhaohb / fastapi_tritonserver
☆28 · Updated last year
Alternatives and similar repositories for fastapi_tritonserver
Users who are interested in fastapi_tritonserver are comparing it to the libraries listed below.
- ☆90 · Updated 2 years ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆274 · Updated 6 months ago
- llm-export can export LLM models to ONNX. ☆343 · Updated 3 months ago
- Compare multiple optimization methods on Triton to improve model service performance ☆52 · Updated 2 years ago
- ☆624 · Updated last year
- Triton Inference Server Model Config and Client Scripts ☆32 · Updated 4 years ago
- ☆55 · Updated last year
- Export LLaMA to ONNX ☆137 · Updated last year
- Run ChatGLM2-6B on BM1684X ☆49 · Updated last year
- PaddlePaddle custom device implementation (custom hardware integration for 『飞桨』, PaddlePaddle) ☆101 · Updated last week
- Hands-on large model deployment: TensorRT-LLM, Triton Inference Server, vLLM ☆27 · Updated last year
- ☆74 · Updated last week
- LLM inference service performance testing ☆44 · Updated 2 years ago
- ☆130 · Updated last year
- Transformer-related optimization, including BERT, GPT ☆39 · Updated 3 years ago
- Serving Inside PyTorch ☆170 · Updated last week
- ☆183 · Updated 2 weeks ago
- Large Language Model ONNX Inference Framework ☆36 · Updated 2 months ago
- Optimize Qwen1.5 models with TensorRT-LLM ☆17 · Updated last year
- ☢️ TensorRT 2023 contest second round: TensorRT-LLM-based Llama model inference acceleration and optimization ☆51 · Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆140 · Updated last year
- ☆141 · Updated last year
- Transformer-related optimization, including BERT, GPT ☆17 · Updated 2 years ago
- LLaMa/RWKV ONNX models, quantization, and test cases ☆366 · Updated 2 years ago
- ☆269 · Updated 2 months ago
- ☆26 · Updated 2 years ago
- LLM101n: Let's build a Storyteller (Chinese edition) ☆137 · Updated last year
- ☆523 · Updated 2 weeks ago
- Simplify large (>2 GB) ONNX models ☆71 · Updated last year
- Qwen2 and Llama3 C++ implementation ☆49 · Updated last year