triton-inference-server / fastertransformer_backend
☆413 · Updated Nov 11, 2023
Alternatives and similar repositories for fastertransformer_backend
Users interested in fastertransformer_backend are comparing it to the libraries listed below.
- Transformer-related optimization, including BERT and GPT. ☆6,392 · Updated Mar 27, 2024
- Common source, scripts and utilities for creating Triton backends. ☆367 · Updated this week
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆10,334 · Updated Feb 6, 2026
- Weekly meetup, Thursdays at 20:00. ☆16 · Updated Jul 24, 2020
- Triton backend that enables pre-processing, post-processing, and other logic to be implemented in Python. ☆667 · Updated Feb 7, 2026
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,152 · Updated Feb 7, 2026
- Efficient, scalable and enterprise-grade CPU/GPU inference server for 🤗 Hugging Face transformer models 🚀 ☆1,689 · Updated Oct 23, 2024
- The Triton TensorRT-LLM Backend. ☆918 · Updated this week
- Triton Python, C++, and Java client libraries, and gRPC-generated client examples for Go, Java, and Scala. ☆677 · Updated Feb 6, 2026
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,093 · Updated Jun 30, 2025
- Triton Model Analyzer is a CLI tool that helps users understand the compute and memory requirements of Triton Inference Server models. ☆504 · Updated Feb 3, 2026
- Transformer-related optimization, including BERT and GPT. ☆59 · Updated Sep 20, 2023
- PyTriton is a Flask/FastAPI-like interface that simplifies Triton's deployment in Python environments. ☆832 · Updated Aug 13, 2025
- Transformer-related optimization, including BERT and GPT. ☆17 · Updated Jul 29, 2023
- Large-scale model inference. ☆627 · Updated Sep 12, 2023
- MeCab model trained with OpenKorPos. ☆23 · Updated Jun 19, 2022
- Ongoing research training transformer language models at scale, including: BERT & GPT-2. ☆1,433 · Updated Mar 20, 2024
- A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU. ☆1,542 · Updated Jul 18, 2025
- Large Language Model Text Generation Inference. ☆10,757 · Updated Jan 8, 2026
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,254 · Updated Mar 27, 2024
- Ongoing research training transformer language models at scale, including: BERT & GPT-2. ☆2,224 · Updated Aug 14, 2025
- Fast and memory-efficient exact attention. ☆22,231 · Updated this week
- TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizations. ☆12,867 · Updated this week
- Deploy KoGPT with Triton Inference Server. ☆14 · Updated Nov 18, 2022
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable. ☆1,586 · Updated Jan 28, 2026
- Triton Model Navigator is an inference toolkit designed for optimizing and deploying Deep Learning models with a focus on NVIDIA GPUs. ☆217 · Updated Feb 3, 2026
- Ongoing research training transformer models at scale. ☆15,162 · Updated this week
- Transformer-related optimization, including BERT and GPT. ☆39 · Updated Feb 10, 2023
- LightSeq: A High Performance Library for Sequence Processing and Generation. ☆3,304 · Updated May 16, 2023
- PyTorch extensions for high performance and large scale training. ☆3,397 · Updated Apr 26, 2025
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalability… ☆3,888 · Updated this week
- Parallelformers: An Efficient Model Parallelization Toolkit for Deployment. ☆791 · Updated Apr 24, 2023
- Accessible large language models via k-bit quantization for PyTorch. ☆7,939 · Updated Jan 22, 2026
- Tiny configuration for Triton Inference Server. ☆45 · Updated Jan 10, 2025
- NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT. ☆12,672 · Updated Feb 4, 2026
- Running BERT without Padding. ☆480 · Updated Mar 18, 2022