triton-inference-server / fastertransformer_backend
☆412 · Updated last year
Alternatives and similar repositories for fastertransformer_backend
Users interested in fastertransformer_backend are comparing it to the libraries listed below.
- Fast Inference Solutions for BLOOM ☆565 · Updated 11 months ago
- Large-scale model inference. ☆632 · Updated 2 years ago
- Running BERT without Padding ☆475 · Updated 3 years ago
- GPTQ inference Triton kernel ☆309 · Updated 2 years ago
- Common source, scripts and utilities for creating Triton backends. ☆348 · Updated 3 weeks ago
- The Triton TensorRT-LLM Backend ☆894 · Updated this week
- Serving multiple LoRA finetuned LLM as one ☆1,093 · Updated last year
- The Triton backend for the ONNX Runtime. ☆162 · Updated last week
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆491 · Updated 3 weeks ago
- Triton backend that enables pre-processing, post-processing and other logic to be implemented in Python. ☆644 · Updated 2 weeks ago
- ☆298 · Updated last week
- ☆121 · Updated last year
- Triton Python, C++ and Java client libraries, and GRPC-generated client examples for go, java and scala. ☆648 · Updated this week
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆902 · Updated last year
- Microsoft Automatic Mixed Precision Library ☆622 · Updated last year
- Scalable PaLM implementation in PyTorch ☆188 · Updated 2 years ago
- ☆200 · Updated 5 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,422 · Updated last year
- ☆547 · Updated 9 months ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆315 · Updated last week
- Transformer-related optimization, including BERT, GPT ☆39 · Updated 2 years ago
- Dynamic batching library for Deep Learning inference. Tutorials for LLM, GPT scenarios. ☆103 · Updated last year
- OpenAI compatible API for TensorRT LLM triton backend ☆215 · Updated last year
- ☆21 · Updated 2 years ago
- Optimized BERT transformer inference on NVIDIA GPU. https://arxiv.org/abs/2210.03052 ☆478 · Updated last year
- Transformer-related optimization, including BERT, GPT ☆59 · Updated 2 years ago
- Easy and Efficient Quantization for Transformers ☆203 · Updated 3 months ago
- ☆220 · Updated 2 years ago
- A high-performance inference system for large language models, designed for production environments. ☆472 · Updated last week
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,063 · Updated 3 months ago