void-main / FasterTransformer
Transformer-related optimization, including BERT and GPT
☆59 · Updated last year
Alternatives and similar repositories for FasterTransformer
Users interested in FasterTransformer are comparing it to the libraries listed below.
- ☆139 · Updated last year
- ☆128 · Updated 6 months ago
- Optimized BERT transformer inference on NVIDIA GPU. https://arxiv.org/abs/2210.03052 ☆475 · Updated last year
- Performance of the C++ interface of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios (see the tiled-attention sketch after this list). ☆39 · Updated 4 months ago
- Running BERT without Padding ☆472 · Updated 3 years ago
- PyTorch bindings for CUTLASS grouped GEMM (see the grouped-GEMM sketch after this list). ☆130 · Updated 6 months ago
- An easy-to-use package for implementing SmoothQuant for LLMs (see the SmoothQuant sketch after this list). ☆102 · Updated 3 months ago
- A collection of memory-efficient attention operators implemented in the Triton language. ☆272 · Updated last year
- ☆79 · Updated last year
- ☆142 · Updated 4 months ago
- Transformer-related optimization, including BERT and GPT ☆39 · Updated 2 years ago
- Export LLaMA to ONNX ☆128 · Updated 6 months ago
- ☆195 · Updated 2 months ago
- ☆149 · Updated 6 months ago
- ☆21 · Updated 2 years ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Parallelism ☆58 · Updated 11 months ago
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" ☆302 · Updated 4 months ago
- ☆96 · Updated 10 months ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆133 · Updated 3 months ago
- Zero Bubble Pipeline Parallelism ☆403 · Updated 2 months ago
- Simple Dynamic Batching Inference ☆145 · Updated 3 years ago
- ☆59 · Updated 7 months ago
- ☆120 · Updated last year
- ☆220 · Updated last year
- ☆137 · Updated last year
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long Context Transformer Model Training and Inference ☆524 · Updated last month
- Pipeline Parallelism Emulation and Visualization ☆45 · Updated last month
- A quantization algorithm for LLMs ☆141 · Updated last year
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5) ☆254 · Updated 8 months ago
- ☆87 · Updated 3 months ago
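
Several of the entries above center on the same attention trick: FlashAttention computes softmax(QKᵀ/√d)·V over key/value tiles with an online softmax, so the full score matrix is never materialized. Below is a minimal single-head PyTorch sketch of that recurrence; the function name, block size, and shapes are illustrative assumptions, not the benchmarked C++ interface.

```python
import torch

def tiled_attention(q, k, v, block=64):
    """Single-head attention via online softmax over key/value tiles.

    q: (n, d), k: (m, d), v: (m, d). Numerically equivalent to
    softmax(q @ k.T / sqrt(d)) @ v, but never forms the (n, m) matrix.
    """
    n, d = q.shape
    scale = d ** -0.5
    out = torch.zeros_like(q)                      # running weighted sum
    row_max = torch.full((n, 1), -float("inf"))    # running max per query row
    row_sum = torch.zeros(n, 1)                    # running softmax denominator
    for start in range(0, k.shape[0], block):
        kb = k[start:start + block]                # key tile
        vb = v[start:start + block]                # value tile
        s = (q @ kb.T) * scale                     # (n, b) partial scores
        new_max = torch.maximum(row_max, s.max(dim=-1, keepdim=True).values)
        correction = torch.exp(row_max - new_max)  # rescale old accumulators
        p = torch.exp(s - new_max)                 # unnormalized tile probabilities
        out = out * correction + p @ vb
        row_sum = row_sum * correction + p.sum(dim=-1, keepdim=True)
        row_max = new_max
    return out / row_sum

# Sanity check against the naive formula.
q, k, v = (torch.randn(128, 64) for _ in range(3))
ref = torch.softmax(q @ k.T / 64 ** 0.5, dim=-1) @ v
assert torch.allclose(tiled_attention(q, k, v), ref, atol=1e-4)
```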
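The CUTLASS grouped-GEMM bindings fuse many independent matmuls with different problem sizes (e.g., per-expert projections in MoE layers) into one kernel launch. The following is only a naive PyTorch reference for what the grouped operation computes, not the repo's API; the fused kernel's benefit is running all groups in a single launch.

```python
import torch

def grouped_gemm_reference(a_list, b_list):
    """Naive reference for grouped GEMM: one matmul per (A_i, B_i) pair.

    Unlike torch.bmm, each group may have different M/N/K; a fused
    kernel would dispatch all of these problems together on the GPU.
    """
    return [a @ b for a, b in zip(a_list, b_list)]

# Example: three problems with different M, as in MoE expert layers.
shapes = [(16, 32, 64), (48, 32, 64), (8, 32, 64)]  # (M_i, K, N)
a_list = [torch.randn(m, k) for m, k, _ in shapes]
b_list = [torch.randn(k, n) for _, k, n in shapes]
outs = grouped_gemm_reference(a_list, b_list)
print([tuple(o.shape) for o in outs])  # [(16, 64), (48, 64), (8, 64)]
```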
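SmoothQuant (arXiv:2211.10438) migrates quantization difficulty from activations to weights: with per-input-channel scales s_j = max|X_j|^α / max|W_j|^(1−α), the rewrite (X·diag(s)⁻¹)(diag(s)·W) = XW is exact in full precision, while the scaled activations have fewer outliers and quantize to INT8 more easily. A sketch of the scale computation under the paper's default α = 0.5; the function name is an assumption and the actual INT8 quantization step is omitted.

```python
import torch

def smoothquant_scales(x, w, alpha=0.5):
    """Per-input-channel smoothing scales: s_j = max|X_j|^a / max|W_j|^(1-a).

    x: (tokens, in_features) calibration activations
    w: (in_features, out_features) linear weight
    """
    act_max = x.abs().amax(dim=0)        # per-channel activation range
    wgt_max = w.abs().amax(dim=1)        # per-channel weight range
    s = act_max.pow(alpha) / wgt_max.pow(1 - alpha)
    return s.clamp(min=1e-5)             # avoid division by zero

x = torch.randn(256, 512) * torch.rand(512) * 10   # outlier-heavy channels
w = torch.randn(512, 1024) * 0.02
s = smoothquant_scales(x, w)
# Folding the scales preserves the output exactly in full precision:
y_ref = x @ w
y_smooth = (x / s) @ (w * s.unsqueeze(1))
assert torch.allclose(y_ref, y_smooth, atol=1e-4)
```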