zhaohb / fastapi_tritonserver
☆27 · Updated last year
Alternatives and similar repositories for fastapi_tritonserver
Users interested in fastapi_tritonserver are comparing it to the libraries listed below.
- ☆90 · Updated 2 years ago
- DashInfer is a native LLM inference engine aiming to deliver industry-leading performance atop various hardware architectures, including … ☆268 · Updated 3 months ago
- llm-export can export LLM models to ONNX. ☆328 · Updated 3 weeks ago
- ☆626 · Updated last year
- ☆52 · Updated last year
- Export LLaMA to ONNX ☆136 · Updated 10 months ago
- Compare multiple optimization methods on Triton to improve model service performance ☆52 · Updated last year
- Triton Inference Server model config and client scripts ☆32 · Updated 3 years ago
- Run ChatGLM2-6B on BM1684X ☆50 · Updated last year
- Serving Inside PyTorch ☆165 · Updated this week
- LLaMA/RWKV ONNX models, quantization, and test cases ☆367 · Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆138 · Updated 11 months ago
- ☆72 · Updated 2 years ago
- High-performance text tokenizer library ☆31 · Updated last year
- Large Language Model ONNX Inference Framework ☆36 · Updated 3 weeks ago
- Pure C++ cross-platform LLM acceleration library with Python bindings; supports Baichuan, GLM, LLaMA, and MOSS base models; runs ChatGLM-6B-class models smoothly on mobile, reaching 10000+ tokens/s on a single GPU ☆45 · Updated 2 years ago
- Transformer related optimization, including BERT, GPT ☆39 · Updated 2 years ago
- ☆512 · Updated 2 months ago
- PaddlePaddle custom device implementation (custom hardware integration for PaddlePaddle) ☆98 · Updated last week
- Optimize QWen1.5 models with TensorRT-LLM ☆17 · Updated last year
- Qwen2 and Llama 3 C++ implementation ☆48 · Updated last year
- ☆267 · Updated this week
- Simplify ONNX models larger than 2 GB ☆66 · Updated 11 months ago
- Simple Dynamic Batching Inference ☆145 · Updated 3 years ago
- ☆177 · Updated this week
- For visual information extraction tasks, uses OCR recognition results to constrain the answers of multimodal large models ☆42 · Updated 10 months ago
- ☆65 · Updated last week
- Transformer related optimization, including BERT, GPT ☆17 · Updated 2 years ago
- Performance testing for LLM inference services ☆44 · Updated last year
- ☢️ TensorRT Hackathon 2023 second round: Llama model inference acceleration and optimization based on TensorRT-LLM ☆50 · Updated 2 years ago