triton-inference-server / fastertransformer_backend
☆411 · Updated last year
Alternatives and similar repositories for fastertransformer_backend:
Users interested in fastertransformer_backend are comparing it to the libraries listed below.
- Fast Inference Solutions for BLOOM ☆562 · Updated 5 months ago
- Running BERT without Padding ☆471 · Updated 3 years ago
- The Triton TensorRT-LLM Backend ☆816 · Updated this week
- GPTQ inference Triton kernel ☆300 · Updated last year
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens ☆783 · Updated 7 months ago
- Common source, scripts and utilities for creating Triton backends ☆311 · Updated last week
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of Triton Inference Server models ☆466 · Updated 3 weeks ago
- The Triton backend for the ONNX Runtime ☆140 · Updated 3 weeks ago
- ☆238 · Updated this week
- Serving multiple LoRA fine-tuned LLMs as one ☆1,044 · Updated 10 months ago
- ☆116 · Updated last year
- Microsoft Automatic Mixed Precision Library ☆587 · Updated 6 months ago
- Large-scale model inference ☆628 · Updated last year
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆471 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,380 · Updated last year
- ☆185 · Updated 6 months ago
- Code used for sourcing and cleaning the BigScience ROOTS corpus ☆309 · Updated 2 years ago
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed ☆1,998 · Updated last week
- Triton backend that enables pre-processing, post-processing and other logic to be implemented in Python ☆597 · Updated last week
- Easy and Efficient Quantization for Transformers ☆195 · Updated last month
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆683 · Updated 7 months ago
- LLaMa/RWKV ONNX models, quantization and test cases ☆359 · Updated last year
- ☆543 · Updated 3 months ago
- Scalable PaLM implementation in PyTorch ☆191 · Updated 2 years ago
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs ☆199 · Updated 2 months ago
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆202 · Updated 8 months ago
- Triton Python, C++ and Java client libraries, and gRPC-generated client examples for Go, Java and Scala ☆613 · Updated 2 weeks ago
- Official repository for LongChat and LongEval ☆515 · Updated 10 months ago
- The Triton backend for PyTorch TorchScript models ☆144 · Updated 3 weeks ago
- Comparison of Language Model Inference Engines ☆210 · Updated 3 months ago