void-main / FasterTransformer
Transformer related optimization, including BERT, GPT
☆59 · Updated last year
Alternatives and similar repositories for FasterTransformer:
Users interested in FasterTransformer are comparing it to the libraries listed below.
- Performance of the C++ interface of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios (see the FlashAttention call sketch after this list). ☆35 · Updated last month
- Transformer related optimization, including BERT, GPT. ☆39 · Updated 2 years ago
- QQQ is an innovative and hardware-optimized W4A8 quantization solution for LLMs. ☆111 · Updated last week
- An easy-to-use package for implementing SmoothQuant for LLMs (see the SmoothQuant scaling sketch after this list). ☆96 · Updated last week
- PyTorch bindings for CUTLASS grouped GEMM (reference semantics sketched after this list). ☆118 · Updated 3 months ago
- A collection of memory-efficient attention operators implemented in the Triton language. ☆262 · Updated 10 months ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Parallelism. ☆51 · Updated 8 months ago
- Benchmark code for the "Online normalizer calculation for softmax" paper (the single-pass algorithm is sketched after this list). ☆90 · Updated 6 years ago
- Transformer related optimization, including BERT, GPT. ☆17 · Updated last year
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode; faster than ZeRO/ZeRO++/FSDP. ☆95 · Updated last year
- Export LLaMA to ONNX (a minimal torch.onnx.export sketch follows the list). ☆121 · Updated 3 months ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer (reference semantics sketched after this list). ☆91 · Updated 2 weeks ago
- Summary of system papers/frameworks/code/tools on training or serving large models. ☆56 · Updated last year
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" (a fake-quantization/STE sketch follows the list). ☆278 · Updated last month
- A quantization algorithm for LLMs. ☆139 · Updated 9 months ago
- [ACL 2024] A novel QAT with Self-Distillation framework to enhance ultra-low-bit LLMs. ☆108 · Updated 11 months ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆472 · Updated last year
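A few hedged code sketches for techniques named in the list follow; all names, shapes, and parameters in them are illustrative unless stated otherwise. First, a minimal call sketch for the FlashAttention benchmark entry. It assumes the `flash-attn` Python package and a CUDA device; the `(batch, seqlen, nheads, headdim)` layout and `flash_attn_func` entry point are the package's documented interface.

```python
# Minimal FlashAttention call sketch (assumes the flash-attn package and a
# CUDA device; fp16/bf16 inputs in (batch, seqlen, nheads, headdim) layout).
import torch
from flash_attn import flash_attn_func

q = torch.randn(1, 1024, 16, 64, dtype=torch.float16, device="cuda")
k = torch.randn(1, 1024, 16, 64, dtype=torch.float16, device="cuda")
v = torch.randn(1, 1024, 16, 64, dtype=torch.float16, device="cuda")

# causal=True gives the decoder-style masking used in LLM inference
out = flash_attn_func(q, k, v, causal=True)  # shape (1, 1024, 16, 64)
```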
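For the SmoothQuant entry, a sketch of the core idea from the SmoothQuant paper (Xiao et al.): migrate quantization difficulty from activations to weights with a per-input-channel scale s_j = max|X_j|^α / max|W_j|^(1−α). Function and variable names below are mine, not the package's API.

```python
# SmoothQuant scaling sketch: X @ W^T == (X / s) @ (W * s)^T, so the product
# is unchanged, but X / s has a flatter range and quantizes to INT8 more easily.
import torch

def smooth_scales(act_absmax: torch.Tensor, weight: torch.Tensor,
                  alpha: float = 0.5) -> torch.Tensor:
    # act_absmax: per-channel max |X| from calibration data, shape [in_features]
    # weight: [out_features, in_features]
    w_absmax = weight.abs().amax(dim=0)  # per-input-channel max |W|
    s = act_absmax.pow(alpha) / w_absmax.pow(1 - alpha).clamp(min=1e-5)
    return s.clamp(min=1e-5)

def apply_smoothing(x, weight, s):
    return x / s, weight * s  # scale activations down, fold s into the weight

x = torch.randn(4, 8) * torch.linspace(0.1, 10, 8)   # simulate outlier channels
w = torch.randn(16, 8)
s = smooth_scales(x.abs().amax(dim=0), w)
x_s, w_s = apply_smoothing(x, w, s)
assert torch.allclose(x @ w.t(), x_s @ w_s.t(), atol=1e-3)  # math is exact
```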
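For the grouped-GEMM entry, a reference-semantics sketch only: a grouped GEMM runs many independent matmuls with different shapes as one fused kernel (e.g. per-expert GEMMs in a mixture-of-experts layer, where each expert sees a different number of tokens). The CUTLASS binding computes the same result as this Python loop without the per-group launch overhead.

```python
# Grouped GEMM reference semantics (illustrative; not the binding's API).
import torch

def grouped_gemm_reference(As, Bs):
    # As[i]: [m_i, k], Bs[i]: [k, n] -- m_i may differ per group
    return [a @ b for a, b in zip(As, Bs)]

As = [torch.randn(m, 64) for m in (3, 17, 9)]   # uneven token counts
Bs = [torch.randn(64, 128) for _ in range(3)]   # one weight per expert
outs = grouped_gemm_reference(As, Bs)           # list of [m_i, 128] results
```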
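For the online-softmax benchmark entry, a NumPy sketch of the single-pass normalizer from Milakov & Gibiansky (2018): the running maximum m and running sum d are updated together, so the input is read once instead of twice. This is the same trick FlashAttention uses to tile attention.

```python
# Single-pass ("online") softmax normalizer sketch.
import numpy as np

def online_softmax(x: np.ndarray) -> np.ndarray:
    m = -np.inf  # running maximum
    d = 0.0      # running normalizer, always relative to the current m
    for v in x:
        m_new = max(m, v)
        # rescale the old sum to the new maximum, then add the new term
        d = d * np.exp(m - m_new) + np.exp(v - m_new)
        m = m_new
    return np.exp(x - m) / d

x = np.random.randn(16)
ref = np.exp(x - x.max()) / np.exp(x - x.max()).sum()  # two-pass reference
assert np.allclose(online_softmax(x), ref)
```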
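The LLaMA-to-ONNX entry rests on `torch.onnx.export`; the toy module below shows the basic call shape only (a real LLaMA export also has to trace attention, the KV cache, and rotary embeddings). The module and file name are illustrative.

```python
# Toy torch.onnx.export sketch with a dynamic batch/sequence axis.
import torch

class TinyMLP(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(64, 64)

    def forward(self, x):
        return torch.nn.functional.silu(self.fc(x))

torch.onnx.export(
    TinyMLP(),
    (torch.randn(1, 8, 64),),          # example input used for tracing
    "tiny_mlp.onnx",
    input_names=["hidden_states"],
    output_names=["output"],
    dynamic_axes={"hidden_states": {0: "batch", 1: "seq_len"}},
)
```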
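For the fp16-activation / quantized-weight GEMM entry, a reference-semantics sketch: the real kernel dequantizes weight tiles inside the matmul, whereas this sketch materializes the full dequantized weight for clarity.

```python
# Weight-only-quantized GEMM reference (illustrative; kernels fuse the dequant).
import torch

def quantize_weight_int8(w: torch.Tensor):
    # symmetric per-output-channel quantization: w ~= q * scale
    scale = w.abs().amax(dim=1, keepdim=True) / 127.0
    q = torch.clamp(torch.round(w / scale), -127, 127).to(torch.int8)
    return q, scale

def weight_only_gemm_reference(x, q, scale):
    w_deq = q.float() * scale          # dequantize (fused in the real kernel)
    return x @ w_deq.t()

w = torch.randn(32, 64)
q, s = quantize_weight_int8(w)
x = torch.randn(4, 64)                 # fp16 on GPU in the real kernel
y = weight_only_gemm_reference(x, q, s)
print(torch.nn.functional.mse_loss(y, x @ w.t()))  # small quantization error
```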
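Finally, for the QAT entries (LLM-QAT and the ACL 2024 self-distillation work), a sketch of the shared core trick, fake quantization with a straight-through estimator (STE): the forward pass uses quantized values, while the backward pass treats the non-differentiable round as the identity. This is the generic mechanism, not either repo's specific method.

```python
# Fake quantization with a straight-through estimator (STE).
import torch

class FakeQuant(torch.autograd.Function):
    @staticmethod
    def forward(ctx, w, n_bits: int = 4):
        qmax = 2 ** (n_bits - 1) - 1                      # 7 for 4-bit
        scale = w.abs().amax() / qmax                     # symmetric per-tensor
        return torch.clamp(torch.round(w / scale), -qmax, qmax) * scale

    @staticmethod
    def backward(ctx, grad_out):
        return grad_out, None  # STE: pass gradients through round() unchanged

w = torch.randn(8, 8, requires_grad=True)
loss = FakeQuant.apply(w).sum()
loss.backward()   # w.grad exists despite the non-differentiable rounding
```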