triton-inference-server / fastertransformer_backend
☆413 · Updated 2 years ago
Alternatives and similar repositories for fastertransformer_backend
Users interested in fastertransformer_backend are comparing it to the libraries listed below.
- Fast Inference Solutions for BLOOM ☆565 · Updated last year
- Large-scale model inference. ☆628 · Updated 2 years ago
- Running BERT without Padding ☆476 · Updated 3 years ago
- GPTQ inference Triton kernel ☆317 · Updated 2 years ago
- Common source, scripts and utilities for creating Triton backends. ☆363 · Updated this week
- The Triton TensorRT-LLM Backend ☆909 · Updated last week
- Serving multiple LoRA finetuned LLMs as one ☆1,128 · Updated last year
- Triton Model Analyzer is a CLI tool that helps you understand the compute and memory requirements of Triton Inference Server models ☆501 · Updated last week
- Provides end-to-end model development pipelines for LLMs and Multimodal models that can be launched on-prem or cloud-native. ☆509 · Updated 8 months ago
- The Triton backend for the ONNX Runtime. ☆170 · Updated 2 weeks ago
- 🏋️ A unified multi-backend utility for benchmarking Transformers, Timm, PEFT, Diffusers and Sentence-Transformers with full support of Optimum's hardware optimizations and quantization schemes ☆325 · Updated 3 months ago
- Triton backend that enables pre-processing, post-processing and other logic to be implemented in Python (a minimal `model.py` sketch follows this list) ☆662 · Updated last week
- Triton Python, C++ and Java client libraries, and gRPC-generated client examples for Go, Java and Scala (a client usage sketch follows this list) ☆669 · Updated 2 weeks ago
- Optimized BERT transformer inference on NVIDIA GPU. https://arxiv.org/abs/2210.03052 ☆476 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,426 · Updated last year
- Microsoft Automatic Mixed Precision Library ☆634 · Updated 3 weeks ago
- OpenAI-compatible API for the TensorRT-LLM Triton backend (a usage sketch follows this list) ☆218 · Updated last year
- Easy and Efficient Quantization for Transformers ☆202 · Updated 6 months ago
- Scalable PaLM implementation in PyTorch ☆189 · Updated 3 years ago
- A high-performance inference system for large language models, designed for production environments. ☆489 · Updated last week
- Automatically split your PyTorch models across multiple GPUs for training & inference ☆657 · Updated last year
- Code used for sourcing and cleaning the BigScience ROOTS corpus ☆317 · Updated 2 years ago
- Central place for the engineering/scaling WG: documentation, SLURM scripts and logs, compute environment and data. ☆1,007 · Updated last year
- Dynamic batching library for deep learning inference, with tutorials for LLM and GPT scenarios ☆106 · Updated last year
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens ☆962 · Updated last year
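
Several entries above are Triton backends. For orientation, here is a minimal sketch of what the Python-backend entry refers to: a `model.py` implementing the `TritonPythonModel` interface that Triton loads for each model. The tensor names (`INPUT0`/`OUTPUT0`) and the doubling "logic" are illustrative assumptions, not taken from any repository listed here.

```python
# Minimal sketch of a Triton Python-backend model (model.py).
# Assumes a model config declaring one FP32 input "INPUT0" and one
# FP32 output "OUTPUT0"; names and shapes are illustrative.
import numpy as np
import triton_python_backend_utils as pb_utils


class TritonPythonModel:
    def initialize(self, args):
        # args carries the model name, config, instance kind, etc.
        self.model_name = args["model_name"]

    def execute(self, requests):
        # Triton may hand the backend a batch of requests; one
        # InferenceResponse must be returned per request, in order.
        responses = []
        for request in requests:
            in0 = pb_utils.get_input_tensor_by_name(request, "INPUT0")
            # Placeholder "post-processing": double the input values.
            out = in0.as_numpy().astype(np.float32) * 2.0
            out_tensor = pb_utils.Tensor("OUTPUT0", out)
            responses.append(
                pb_utils.InferenceResponse(output_tensors=[out_tensor])
            )
        return responses

    def finalize(self):
        # Called once when the model is unloaded.
        pass
```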
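Likewise, a minimal sketch of calling a deployed model with the Python HTTP client from the client-libraries entry; the server address, model name, and tensor names are assumptions for illustration.

```python
# Minimal sketch of querying Triton over HTTP with tritonclient
# (pip install tritonclient[http]). Model/tensor names are assumed.
import numpy as np
import tritonclient.http as httpclient

client = httpclient.InferenceServerClient(url="localhost:8000")

# Build the request: one FP32 input tensor named "INPUT0".
data = np.ones((1, 16), dtype=np.float32)
inp = httpclient.InferInput("INPUT0", data.shape, "FP32")
inp.set_data_from_numpy(data)

result = client.infer(model_name="my_model", inputs=[inp])
print(result.as_numpy("OUTPUT0"))
```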
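And for the OpenAI-compatible proxy entry: the point of such a shim is that the standard `openai` client works unchanged once `base_url` points at the proxy. The URL, port, and model name below are placeholders, not that project's documented defaults.

```python
# Minimal sketch of hitting an OpenAI-compatible endpoint in front of
# a TensorRT-LLM Triton deployment. base_url and model are placeholders.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")

resp = client.chat.completions.create(
    model="ensemble",  # model name as exposed by the proxy (assumed)
    messages=[{"role": "user", "content": "Hello!"}],
    max_tokens=64,
)
print(resp.choices[0].message.content)
```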