NVIDIA / FasterTransformer
Transformer-related optimization, including BERT and GPT
☆6,158 · Updated last year
Alternatives and similar repositories for FasterTransformer
Users interested in FasterTransformer are comparing it to the libraries listed below.
- Fast and memory-efficient exact attention ☆17,346 · Updated last week
- A library for accelerating Transformer models on NVIDIA GPUs, including using 8-bit floating point (FP8) precision on Hopper, Ada and Bla… ☆2,412 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆7,020 · Updated this week
- Ongoing research training transformer models at scale ☆12,358 · Updated this week
- PyTorch extensions for high performance and large scale training. ☆3,317 · Updated 3 weeks ago
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,005 · Updated last week
- Foundation Architecture for (M)LLMs ☆3,077 · Updated last year
- MII makes low-latency and high-throughput inference possible, powered by DeepSpeed. ☆2,010 · Updated last month
- 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (i… ☆8,708 · Updated this week (a minimal usage sketch follows this list)
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆2,071 · Updated last month
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,104 · Updated last year
- AITemplate is a Python framework which renders neural networks into high-performance CUDA/HIP C++ code. Specialized for FP16 TensorCore (N… ☆4,633 · Updated last month
- Hackable and optimized Transformers building blocks, supporting a composable construction. ☆9,476 · Updated 3 weeks ago
- Development repository for the Triton language and compiler ☆15,568 · Updated this week
- TensorRT-LLM provides users with an easy-to-use Python API to define Large Language Models (LLMs) and support state-of-the-art optimizati… ☆10,508 · Updated this week
- FlashInfer: Kernel Library for LLM Serving ☆2,966 · Updated this week
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆4,846 · Updated last month
- CUDA Templates for Linear Algebra Subroutines ☆7,540 · Updated this week
- PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT ☆2,749 · Updated this week
- A machine learning compiler for GPUs, CPUs, and ML accelerators ☆3,169 · Updated this week
- SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX R… ☆2,402 · Updated this week
- 🚀 Accelerate inference and training of 🤗 Transformers, Diffusers, TIMM and Sentence Transformers with easy to use hardware optimization… ☆2,898 · Updated this week
- Large Language Model Text Generation Inference ☆10,119 · Updated this week
- LightLLM is a Python-based LLM (Large Language Model) inference and serving framework, notable for its lightweight design, easy scalabili… ☆3,221 · Updated this week
- Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackab… ☆1,565 · Updated last year
- LMDeploy is a toolkit for compressing, deploying, and serving LLMs. ☆6,357 · Updated this week
- [ICML 2023] SmoothQuant: Accurate and Efficient Post-Training Quantization for Large Language Models ☆1,405 · Updated 10 months ago
- AutoAWQ implements the AWQ algorithm for 4-bit quantization with a 2x speedup during inference. Documentation: ☆2,162 · Updated last week
- Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models" ☆11,921 · Updated 5 months ago (see the loralib sketch after this list)
- A PyTorch Extension: Tools for easy mixed precision and distributed training in PyTorch ☆8,661 · Updated this week
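
For the 🤗 Accelerate entry above, a minimal sketch of its launch-and-train flow, assuming `accelerate` and `torch` are installed; the toy linear model, optimizer, and random data are placeholder choices, not anything from the listing, while `Accelerator`, `prepare`, and `backward` are the library's documented entry points.

```python
# Minimal Accelerate training-loop sketch (assumes `pip install accelerate torch`).
# The toy model and data are placeholders; Accelerator handles device placement,
# mixed precision, and distributed setup without changing the loop itself.
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()  # picks CPU / single GPU / multi-GPU from the launch environment

model = torch.nn.Linear(16, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataset = TensorDataset(torch.randn(64, 16), torch.randn(64, 1))
loader = DataLoader(dataset, batch_size=8)

# prepare() wraps model/optimizer/dataloader for the current device and world size
model, optimizer, loader = accelerator.prepare(model, optimizer, loader)

for inputs, targets in loader:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(inputs), targets)
    accelerator.backward(loss)  # stands in for loss.backward() so AMP/DDP scaling is handled
    optimizer.step()
```

The same script can be run unchanged under `accelerate launch` to move from a laptop CPU to a multi-GPU node, which is the "almost any device and distributed configuration" point in the entry.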
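For the loralib (LoRA) entry, a small usage sketch assuming `loralib` and `torch` are installed; the two-layer model, the rank `r=8`, and the output filename are illustrative assumptions, while `lora.Linear`, `lora.mark_only_lora_as_trainable`, and `lora.lora_state_dict` are the package's public helpers.

```python
# Sketch of adapting a tiny model with loralib (assumes `pip install loralib torch`).
# The layer sizes and rank r=8 are illustrative, not taken from the repository.
import torch
import loralib as lora

model = torch.nn.Sequential(
    lora.Linear(32, 64, r=8),   # drop-in nn.Linear replacement with low-rank A/B factors
    torch.nn.ReLU(),
    torch.nn.Linear(64, 2),     # plain layers stay frozen after the call below
)

# Freeze everything except the injected low-rank matrices before fine-tuning
lora.mark_only_lora_as_trainable(model)

out = model(torch.randn(4, 32))  # forward pass works as usual
print([n for n, p in model.named_parameters() if p.requires_grad])  # only lora_A / lora_B

# After training, persist just the small adapter weights rather than the full model
torch.save(lora.lora_state_dict(model), "lora_adapter.pt")
```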