NVIDIA / FasterTransformer
Transformer-related optimization, including BERT, GPT
☆6,392 · Updated last year
Alternatives and similar repositories for FasterTransformer
Users interested in FasterTransformer are comparing it to the libraries listed below.
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit and 4-bit floating point (FP8 and FP4) precision on H… ☆3,132 · Updated this week
- Ongoing research training transformer models at scale ☆15,100 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch (see the quantized-loading sketch after this list). ☆7,931 · Updated 2 weeks ago
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,431 · Updated 6 months ago
- PyTorch extensions for high performance and large scale training. ☆3,397 · Updated 9 months ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,224 · Updated 5 months ago
- Fast and memory-efficient exact attention (see the fused-attention sketch after this list) ☆22,113 · Updated this week
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,249 · Updated last year
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,092 · Updated 7 months ago
- LightSeq: A High Performance Library for Sequence Processing and Generation ☆3,304 · Updated 2 years ago
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆3,875 · Updated this week
- AITemplate is a Python framework which renders neural networks into high performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆4,702 · Updated 3 weeks ago
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,600 · Updated last year
- FlashInfer: Kernel Library for LLM Serving ☆4,853 · Updated this week
- Automatically Discovering Fast Parallelization Strategies for Distributed Deep Neural Network Training ☆1,859 · Updated last week
- TensorRT LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and supports state-of-the-art optimizat… ☆12,811 · Updated this week
- Foundation Architecture for (M)LLMs ☆3,132 · Updated last year
- Training and serving large-scale neural networks with auto parallelization. ☆3,183 · Updated 2 years ago
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆9,477 · Updated last week
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆10,326 · Updated this week
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ☆3,277 · Updated 3 weeks ago
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ☆1,587 · Updated last week
- Example models using DeepSpeed ☆6,779 · Updated last month
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆1,431 · Updated last year
- Development repository for the Triton language and compiler (see the kernel sketch after this list) ☆18,319 · Updated this week
- SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, … ☆2,577 · Updated last week
- CUDA Templates and Python DSLs for High-Performance Linear Algebra ☆9,226 · Updated this week
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,699 · Updated last year
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,939 · Updated this week
- The Triton Inference Server provides an optimized cloud and edge inferencing solution. ☆10,298 · Updated last week
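
The k-bit quantization entry above is typically driven through the Hugging Face Transformers integration rather than called directly. A minimal sketch is below; it assumes the `transformers` and `bitsandbytes` packages and a CUDA GPU, and the checkpoint name is only a placeholder.

```python
# Minimal sketch: loading a causal LM in 8-bit via the Transformers
# bitsandbytes integration. The checkpoint name is a placeholder; any
# decoder-only model on the Hub follows the same pattern.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

model_id = "facebook/opt-1.3b"  # placeholder checkpoint
quant_config = BitsAndBytesConfig(load_in_8bit=True)  # or load_in_4bit=True

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,  # weights are quantized while loading
    device_map="auto",                 # place layers on available GPUs
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("FasterTransformer is", return_tensors="pt").to(model.device)
output_ids = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```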
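The "fast and memory-efficient exact attention" entry exposes a fused attention kernel that can be called directly. A minimal sketch, assuming the `flash-attn` package and an fp16-capable CUDA GPU, with tensors in the library's documented `(batch, seqlen, nheads, headdim)` layout:

```python
# Minimal sketch: calling a fused exact-attention kernel directly.
# Inputs must be fp16/bf16 CUDA tensors shaped (batch, seqlen, nheads, headdim).
import torch
from flash_attn import flash_attn_func

q = torch.randn(2, 1024, 8, 64, dtype=torch.float16, device="cuda")
k = torch.randn(2, 1024, 8, 64, dtype=torch.float16, device="cuda")
v = torch.randn(2, 1024, 8, 64, dtype=torch.float16, device="cuda")

# causal=True applies the autoregressive mask used by GPT-style decoders.
out = flash_attn_func(q, k, v, dropout_p=0.0, causal=True)
print(out.shape)  # (2, 1024, 8, 64)
```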
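Several of the kernel libraries listed (FlashInfer, Kernl, and the Triton compiler itself) build on the Triton programming model. A minimal vector-add kernel, assuming the `triton` package and a CUDA GPU, sketches that model; it is a tutorial-style example, not a FasterTransformer kernel.

```python
# Minimal sketch of a Triton kernel: each program instance handles one
# BLOCK_SIZE-wide slice of the input, guarded by a bounds mask.
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)                        # block index
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements                        # guard the tail block
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x = torch.rand(4096, device="cuda")
y = torch.rand(4096, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 1024),)                 # one program per block
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=1024)
assert torch.allclose(out, x + y)
```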