void-main / FasterTransformer
Transformer-related optimization, including BERT and GPT
☆59 · Updated last year
Alternatives and similar repositories for FasterTransformer
Users interested in FasterTransformer are comparing it to the libraries listed below.
- ☆139 · Updated last year
- ☆127 · Updated 5 months ago
- Performance of the C++ interface of flash attention and flash attention v2 in large language model (LLM) inference scenarios ☆38 · Updated 3 months ago
- An easy-to-use package for implementing SmoothQuant for LLMs ☆100 · Updated 2 months ago
- Transformer-related optimization, including BERT and GPT ☆39 · Updated 2 years ago
- ☆148 · Updated 5 months ago
- ☆141 · Updated 3 months ago
- ☆21 · Updated last year
- ☆96 · Updated 9 months ago
- PyTorch bindings for CUTLASS grouped GEMM ☆127 · Updated 5 months ago
- ☆79 · Updated last year
- Simple Dynamic Batching Inference ☆145 · Updated 3 years ago
- A collection of memory-efficient attention operators implemented in the Triton language ☆272 · Updated last year
- ☆194 · Updated last month
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆55 · Updated 10 months ago
- ☆58 · Updated 7 months ago
- QQQ is a hardware-optimized W4A8 quantization solution for LLMs ☆126 · Updated 2 months ago
- ☆119 · Updated last year
- Transformer-related optimization, including BERT and GPT ☆17 · Updated last year
- Benchmark code for the "Online normalizer calculation for softmax" paper (see the sketch after this list) ☆94 · Updated 6 years ago
- Running BERT without Padding ☆471 · Updated 3 years ago
- ☆86 · Updated 2 months ago
- ☆135 · Updated last year
- Export LLaMA to ONNX ☆126 · Updated 5 months ago
- A standalone GEMM kernel for fp16 activation and quantized weight, extracted from FasterTransformer ☆92 · Updated 3 weeks ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆474 · Updated last year
- ☆97 · Updated 2 months ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆210 · Updated 10 months ago
- FP8 flash attention for the Ada architecture implemented with the cutlass repository ☆70 · Updated 10 months ago
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode; faster than ZeRO/ZeRO++/FSDP ☆96 · Updated last year
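For the "Online normalizer calculation for softmax" entry above, here is a minimal NumPy sketch of the one-pass algorithm that the paper describes; the function name `online_softmax` and the test values are illustrative and are not taken from the linked benchmark code.

```python
import numpy as np

def online_softmax(x):
    """One-pass softmax: track the running maximum m and a running
    normalizer d, rescaling d whenever the maximum changes."""
    m = float("-inf")   # running maximum of the inputs seen so far
    d = 0.0             # running sum of exp(x_i - m)
    for v in x:
        m_new = max(m, float(v))
        d = d * np.exp(m - m_new) + np.exp(float(v) - m_new)
        m = m_new
    return np.exp(np.asarray(x, dtype=np.float64) - m) / d

# Sanity check against the standard two-pass formulation.
x = np.array([1.0, -2.5, 3.0, 0.5])
ref = np.exp(x - x.max()) / np.exp(x - x.max()).sum()
assert np.allclose(online_softmax(x), ref)
```

The point of the single pass is that the maximum and the normalizer are computed together, which is what lets softmax be fused with surrounding kernels in attention implementations.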