bytedance / ByteTransformer
Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052
☆479 · Updated last year
Alternatives and similar repositories for ByteTransformer
Users interested in ByteTransformer are comparing it to the libraries listed below.
- ☆139 · Updated last year
- ☆129 · Updated 10 months ago
- Transformer-related optimization, including BERT and GPT ☆59 · Updated 2 years ago
- FlagGems is an operator library for large language models implemented in the Triton language. ☆703 · Updated this week
- Running BERT without Padding (a minimal padding-removal sketch follows this list) ☆475 · Updated 3 years ago
- A collection of memory-efficient attention operators implemented in the Triton language. ☆282 · Updated last year
- ☆150 · Updated 9 months ago
- ☆507 · Updated last month
- ☆219 · Updated 2 years ago
- AI Accelerator Benchmark focuses on evaluating AI Accelerators from a practical production perspective, including the ease of use and ver… ☆265 · Updated 2 months ago
- A model compilation solution for various hardware ☆451 · Updated 2 months ago
- BladeDISC is an end-to-end DynamIc Shape Compiler project for machine learning workloads. ☆899 · Updated 9 months ago
- Zero Bubble Pipeline Parallelism ☆432 · Updated 5 months ago
- RTP-LLM: Alibaba's high-performance LLM inference engine for diverse applications. ☆892 · Updated last week
- ☆148 · Updated 7 months ago
- Performance of the C++ interface of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios. ☆41 · Updated 7 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆156 · Updated 2 weeks ago
- ☆59 · Updated 11 months ago
- A simple high-performance CUDA GEMM implementation (a tiled-GEMM sketch follows this list). ☆411 · Updated last year
- ☆141 · Updated last year
- ☆210 · Updated 11 months ago
- An easy-to-understand TensorOp matmul tutorial ☆385 · Updated 2 weeks ago
- GLake: optimizing GPU memory management and IO transmission. ☆483 · Updated 7 months ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆269 · Updated 2 years ago
- Efficient GPU support for LLM inference with x-bit quantization (e.g., FP6, FP5). ☆265 · Updated 3 months ago
- [USENIX ATC '24] Accelerating the Training of Large Language Models using Efficient Activation Rematerialization and Optimal Hybrid Paral… ☆65 · Updated last year
- Yinghan's Code Sample ☆353 · Updated 3 years ago
- Microsoft Automatic Mixed Precision Library ☆626 · Updated last year
- Flash attention tutorial written in Python, Triton, CUDA, and CUTLASS ☆428 · Updated 5 months ago
- PyTorch distributed training acceleration framework ☆53 · Updated 2 months ago
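
To make the "Running BERT without Padding" entry concrete: the core idea is to pack only the valid tokens of a padded `[batch, seq_len, hidden]` batch into a contiguous `[total_tokens, hidden]` buffer, so downstream GEMMs and attention never touch pad positions. The sketch below is a minimal illustration of that idea under assumed names and a row-major layout; it is not the repository's actual API.

```cuda
// remove_padding.cu — minimal padding-removal sketch (assumptions: row-major
// [batch, seq_len, hidden] input; `prefix` is an exclusive prefix sum of the
// per-sequence lengths, computed on the host).
#include <cuda_runtime.h>
#include <cstdio>

__global__ void remove_padding(const float* padded, float* packed,
                               const int* seq_lens, const int* prefix,
                               int seq_len, int hidden) {
    int b = blockIdx.y;                      // which sequence in the batch
    int t = blockIdx.x;                      // which token in that sequence
    if (t >= seq_lens[b]) return;            // pad positions are simply skipped
    const float* src = padded + (size_t)(b * seq_len + t) * hidden;
    float*       dst = packed + (size_t)(prefix[b] + t) * hidden;
    for (int h = threadIdx.x; h < hidden; h += blockDim.x)
        dst[h] = src[h];                     // copy one token's hidden vector
}

int main() {
    const int batch = 2, seq_len = 4, hidden = 8;
    int h_lens[batch]   = {3, 1};            // real lengths; the rest is padding
    int h_prefix[batch] = {0, 3};            // exclusive prefix sum of lengths
    const int total = 4;                     // 3 + 1 valid tokens

    float h_padded[batch * seq_len * hidden];
    for (int i = 0; i < batch * seq_len * hidden; ++i) h_padded[i] = (float)i;

    float *d_padded, *d_packed; int *d_lens, *d_prefix;
    cudaMalloc(&d_padded, sizeof(h_padded));
    cudaMalloc(&d_packed, total * hidden * sizeof(float));
    cudaMalloc(&d_lens, sizeof(h_lens));
    cudaMalloc(&d_prefix, sizeof(h_prefix));
    cudaMemcpy(d_padded, h_padded, sizeof(h_padded), cudaMemcpyHostToDevice);
    cudaMemcpy(d_lens, h_lens, sizeof(h_lens), cudaMemcpyHostToDevice);
    cudaMemcpy(d_prefix, h_prefix, sizeof(h_prefix), cudaMemcpyHostToDevice);

    dim3 grid(seq_len, batch);               // one block per (token, sequence)
    remove_padding<<<grid, 128>>>(d_padded, d_packed, d_lens, d_prefix, seq_len, hidden);

    float h_packed[total * hidden];
    cudaMemcpy(h_packed, d_packed, sizeof(h_packed), cudaMemcpyDeviceToHost);
    printf("first value of packed sequence 1: %f\n", h_packed[3 * hidden]); // token 0 of seq 1
    cudaFree(d_padded); cudaFree(d_packed); cudaFree(d_lens); cudaFree(d_prefix);
    return 0;
}
```

A restore-padding kernel with the indices swapped scatters results back before any per-sequence postprocessing.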
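
And for the "simple high-performance CUDA GEMM" entry, this is the classic starting point such tutorials build from: a shared-memory tiled SGEMM. It is a teaching sketch in the spirit of that repository, not its code; tile size and layout are assumptions.

```cuda
// tiled_gemm.cu — shared-memory tiled GEMM sketch: C = A * B, row-major floats.
#include <cuda_runtime.h>
#include <cstdio>

#define TILE 16

__global__ void sgemm_tiled(const float* A, const float* B, float* C,
                            int M, int N, int K) {
    __shared__ float As[TILE][TILE];
    __shared__ float Bs[TILE][TILE];
    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.f;
    for (int k0 = 0; k0 < K; k0 += TILE) {
        // stage one TILE x TILE block of A and of B into shared memory
        As[threadIdx.y][threadIdx.x] =
            (row < M && k0 + threadIdx.x < K) ? A[row * K + k0 + threadIdx.x] : 0.f;
        Bs[threadIdx.y][threadIdx.x] =
            (col < N && k0 + threadIdx.y < K) ? B[(k0 + threadIdx.y) * N + col] : 0.f;
        __syncthreads();
        for (int k = 0; k < TILE; ++k)       // multiply-accumulate over the tile
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();
    }
    if (row < M && col < N) C[row * N + col] = acc;
}

int main() {
    const int M = 64, N = 64, K = 64;
    float *A, *B, *C;                        // unified memory keeps the demo short
    cudaMallocManaged(&A, M * K * sizeof(float));
    cudaMallocManaged(&B, K * N * sizeof(float));
    cudaMallocManaged(&C, M * N * sizeof(float));
    for (int i = 0; i < M * K; ++i) A[i] = 1.f;
    for (int i = 0; i < K * N; ++i) B[i] = 2.f;
    dim3 block(TILE, TILE), grid((N + TILE - 1) / TILE, (M + TILE - 1) / TILE);
    sgemm_tiled<<<grid, block>>>(A, B, C, M, N, K);
    cudaDeviceSynchronize();
    printf("C[0] = %f (expect %f)\n", C[0], 2.f * K);  // 1 * 2 summed over K
    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```

High-performance variants layer register blocking, vectorized loads, double buffering, and Tensor Core (TensorOp) instructions on top of this same tiling structure.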