triton-inference-server / fastertransformer_backend
☆411 · Updated last year
Alternatives and similar repositories for fastertransformer_backend
Users who are interested in fastertransformer_backend are comparing it to the libraries listed below.
- Fast Inference Solutions for BLOOM ☆561 · Updated 7 months ago
- Running BERT without Padding ☆471 · Updated 3 years ago
- GPTQ inference Triton kernel ☆299 · Updated last year
- ☆117 · Updated last year
- Common source, scripts and utilities for creating Triton backends. ☆321 · Updated last week
- Large-scale model inference. ☆629 · Updated last year
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens ☆818 · Updated 8 months ago
- The Triton TensorRT-LLM Backend ☆833 · Updated this week
- LLaMa/RWKV ONNX models, quantization and test cases ☆363 · Updated last year
- Microsoft Automatic Mixed Precision Library ☆595 · Updated 7 months ago
- Triton Model Analyzer is a CLI tool to help with better understanding of the compute and memory requirements of the Triton Inference Serv… ☆476 · Updated 3 weeks ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,386 · Updated last year
- The Triton backend for the ONNX Runtime. ☆144 · Updated last week
- Serving multiple LoRA-finetuned LLMs as one ☆1,060 · Updated last year
- [ICML 2024] SqueezeLLM: Dense-and-Sparse Quantization ☆687 · Updated 9 months ago
- Optimized BERT transformer inference on NVIDIA GPU. https://arxiv.org/abs/2210.03052 ☆473 · Updated last year
- ☆190 · Updated last week
- OpenAI-compatible API for the TensorRT-LLM Triton backend ☆205 · Updated 9 months ago
- Universal cross-platform tokenizers binding to HF and sentencepiece ☆327 · Updated last week
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of O… ☆300 · Updated this week
- Easy and Efficient Quantization for Transformers ☆197 · Updated 3 months ago
- Scalable PaLM implementation in PyTorch ☆190 · Updated 2 years ago
- ☆255 · Updated last week
- Triton Python, C++ and Java client libraries, and gRPC-generated client examples for Go, Java and Scala. ☆620 · Updated this week
- Comparison of Language Model Inference Engines ☆217 · Updated 4 months ago
- Triton backend that enables pre-processing, post-processing and other logic to be implemented in Python. ☆608 · Updated this week
- Code used for sourcing and cleaning the BigScience ROOTS corpus ☆311 · Updated 2 years ago
- A high-throughput and memory-efficient inference and serving engine for LLMs ☆263 · Updated 7 months ago
- A throughput-oriented high-performance serving framework for LLMs ☆806 · Updated this week
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ☆1,566 · Updated last year