NVIDIA / TransformerEngine
A library for accelerating Transformer models on NVIDIA GPUs, including support for 8-bit and 4-bit floating point (FP8 and FP4) precision on Hopper, Ada, and Blackwell GPUs, providing better performance with lower memory utilization in both training and inference.
☆3,116 · Updated this week
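For orientation, the core usage pattern is a drop-in module swap plus an FP8 autocast context. Below is a minimal sketch following TransformerEngine's documented PyTorch quickstart; recipe arguments vary across releases, and FP8 execution requires supported hardware such as Hopper or newer.

```python
import torch
import transformer_engine.pytorch as te
from transformer_engine.common import recipe

# Delayed-scaling FP8 recipe: E4M3 for forward tensors, E5M2 for gradients.
fp8_recipe = recipe.DelayedScaling(margin=0, fp8_format=recipe.Format.HYBRID)

# te.Linear is a drop-in replacement for torch.nn.Linear.
model = te.Linear(768, 3072, bias=True).cuda()
inp = torch.randn(16, 768, device="cuda", dtype=torch.bfloat16)

# GEMMs launched under this context run in FP8 on supported hardware;
# parameters and optimizer state stay in higher precision.
with te.fp8_autocast(enabled=True, fp8_recipe=fp8_recipe):
    out = model(inp)
out.sum().backward()
```

Scoping the autocast to the compute keeps master weights in higher precision while the matrix multiplies use FP8 tensor cores, which is where the speed and memory savings come from.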
Alternatives and similar repositories for TransformerEngine
Users interested in TransformerEngine are comparing it to the libraries listed below.
- FlashInfer: Kernel Library for LLM Serving ☆4,707 · Updated last week
- PyTorch native quantization and sparsity for training and inference ☆2,645 · Updated this week
- Transformer-related optimization, including BERT, GPT ☆6,384 · Updated last year
- A unified library of SOTA model optimization techniques like quantization, pruning, distillation, speculative decoding, etc. It compresse… ☆1,848 · Updated last week
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,590 · Updated last year
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration (a minimal quantization sketch follows this list) ☆3,420 · Updated 6 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,220 · Updated 5 months ago
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,090 · Updated 6 months ago
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,247 · Updated last year
- SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, … ☆2,574 · Updated this week
- Pipeline Parallelism for PyTorch ☆785 · Updated last year
- Tile primitives for speedy kernels ☆3,096 · Updated last week
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,857 · Updated this week
- PyTorch extensions for high performance and large scale training. ☆3,393 · Updated 9 months ago
- 🚀 Efficient implementations of state-of-the-art linear attention models ☆4,282 · Updated this week
- Distributed Compiler based on Triton for Parallel Systems ☆1,315 · Updated 3 weeks ago
- Puzzles for learning Triton ☆2,246 · Updated last year
- NCCL Tests ☆1,406 · Updated last week
- Mirage Persistent Kernel: Compiling LLMs into a MegaKernel ☆2,091 · Updated this week
- A fast communication-overlapping library for tensor/expert parallelism on GPUs. ☆1,231 · Updated 4 months ago
- Minimalistic large language model 3D-parallelism training ☆2,497 · Updated last month
- FP16xINT4 LLM inference kernel that can achieve near-ideal ~4x speedups up to medium batch sizes of 16-32 tokens. ☆992 · Updated last year
- The Triton TensorRT-LLM Backend ☆917 · Updated this week
- Microsoft Automatic Mixed Precision Library ☆635 · Updated last month
- Tutel MoE: Optimized Mixture-of-Experts Library, supporting GptOss/DeepSeek/Kimi-K2/Qwen3 using FP8/NVFP4/MXFP4 ☆956 · Updated last month
- Flash Attention in ~100 lines of CUDA (forward pass only; a tiled-softmax sketch follows this list) ☆1,047 · Updated last year
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,429 · Updated last year
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆2,615 · Updated this week
- Domain-specific language designed to streamline the development of high-performance GPU/CPU/accelerator kernels ☆4,806 · Updated this week
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,315 · Updated 10 months ago
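Several entries above (SmoothQuant, AWQ, GPTQ, the low-bit compression libraries, and the FP16xINT4 inference kernel) center on post-training weight quantization. For reference, here is a minimal round-to-nearest INT4 group-wise quantizer in plain PyTorch. It is the naive baseline those projects improve on, not any library's API; the function names are illustrative.

```python
import torch

def quantize_int4_groupwise(w: torch.Tensor, group_size: int = 128):
    """Symmetric round-to-nearest INT4 quantization with per-group scales.

    Illustrative baseline only: the listed projects refine this
    (GPTQ compensates rounding error with second-order statistics,
    AWQ rescales channels based on activation magnitudes).
    """
    out_features, in_features = w.shape
    assert in_features % group_size == 0
    g = w.reshape(out_features, -1, group_size)
    # Map the largest |w| in each group to the INT4 maximum (7).
    scale = g.abs().amax(dim=-1, keepdim=True) / 7.0
    q = (g / scale).round().clamp_(-8, 7).to(torch.int8)
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    return (q.float() * scale).reshape(q.shape[0], -1)

w = torch.randn(256, 1024)
q, s = quantize_int4_groupwise(w)
print(f"mean abs quantization error: {(w - dequantize(q, s)).abs().mean():.5f}")
```

Group-wise scales (one per 128 weights here) keep a single outlier from inflating the quantization error of an entire row, which is why most of the listed schemes use them.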
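The "Flash Attention in ~100 lines of CUDA" entry implements only the forward pass; its algorithmic core is an online softmax computed over tiles of keys and values, so the full [seq, seq] score matrix is never materialized. A rough PyTorch rendering of that tiling, illustrative only and not the repository's code:

```python
import torch

def attention_tiled(q, k, v, block: int = 64):
    """Single-head attention with an online softmax over KV tiles."""
    seq, d = q.shape
    scale = d ** -0.5
    out = torch.zeros_like(q)
    row_max = torch.full((seq, 1), float("-inf"))
    row_sum = torch.zeros(seq, 1)
    for start in range(0, seq, block):
        kb = k[start:start + block]                      # [block, d]
        vb = v[start:start + block]
        s = (q @ kb.T) * scale                           # partial scores
        new_max = torch.maximum(row_max, s.max(dim=-1, keepdim=True).values)
        correction = torch.exp(row_max - new_max)        # rescale old state
        p = torch.exp(s - new_max)
        row_sum = row_sum * correction + p.sum(dim=-1, keepdim=True)
        out = out * correction + p @ vb
        row_max = new_max
    return out / row_sum

q, k, v = (torch.randn(128, 32) for _ in range(3))
ref = torch.softmax((q @ k.T) * 32 ** -0.5, dim=-1) @ v
assert torch.allclose(attention_tiled(q, k, v), ref, atol=1e-4)
```

Rescaling the running maximum and running sum as each tile arrives is what makes the blockwise result exactly equal to the dense softmax.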