bytedance / effective_transformer
Running BERT without Padding
☆476 · Updated 3 years ago
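The core trick behind "BERT without padding" is to drop pad tokens before the compute-heavy layers and restore them afterwards, so batched inference spends no work on padding. Below is a minimal PyTorch sketch of that gather/scatter idea; the function names and the plain mask-tensor interface are illustrative, not the repository's actual fused CUDA API.

```python
import torch

def remove_padding(hidden, mask):
    """Pack valid tokens from [batch, seq_len, dim] into [num_valid, dim]."""
    # mask: [batch, seq_len], 1 for real tokens, 0 for padding
    idx = mask.flatten().nonzero(as_tuple=True)[0]      # flat positions of real tokens
    packed = hidden.flatten(0, 1).index_select(0, idx)  # gather: no pad rows remain
    return packed, idx

def restore_padding(packed, idx, batch, seq_len):
    """Scatter packed tokens back to a zero-padded [batch, seq_len, dim] tensor."""
    out = packed.new_zeros(batch * seq_len, packed.size(-1))
    out.index_copy_(0, idx, packed)                     # scatter to original slots
    return out.view(batch, seq_len, -1)

# Token-wise layers (FFN, layernorm, residual adds) can run directly on the
# packed tensor; only attention needs per-sequence boundaries, which the real
# kernels track with a prefix sum of the sequence lengths.
```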
Alternatives and similar repositories for effective_transformer
Users interested in effective_transformer are comparing it to the libraries listed below.
- Optimized BERT transformer inference on NVIDIA GPU. https://arxiv.org/abs/2210.03052 ☆476 · Updated last year
- ☆219 · Updated 2 years ago
- Transformer-related optimization, including BERT, GPT ☆59 · Updated 2 years ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆271 · Updated 2 years ago
- Deep Learning Framework Performance Profiling Toolkit ☆294 · Updated 3 years ago
- ☆413 · Updated 2 years ago
- Zero Bubble Pipeline Parallelism ☆449 · Updated 8 months ago
- A collection of memory-efficient attention operators implemented in the Triton language. ☆287 · Updated last year
- Simple Dynamic Batching Inference ☆145 · Updated 3 years ago
- OneFlow models for benchmarking. ☆104 · Updated last year
- ☆130 · Updated last year
- Transformer-related optimization, including BERT, GPT ☆39 · Updated 2 years ago
- Microsoft Automatic Mixed Precision Library ☆635 · Updated 2 months ago
- ☆141 · Updated last year
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training ☆405 · Updated 6 months ago
- Easy and Efficient Transformer: Scalable Inference Solution for Large NLP Models ☆265 · Updated last year
- PyTorch bindings for CUTLASS grouped GEMM. ☆185 · Updated last month
- ☆125 · Updated last year
- Best practices for training LLaMA models in Megatron-LM ☆664 · Updated 2 years ago
- A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU. ☆1,541 · Updated 6 months ago
- A high-performance framework for training wide-and-deep recommender systems on heterogeneous clusters ☆160 · Updated last year
- ☆22 · Updated 2 years ago
- A Fast Multi-processing BERT Inference System ☆102 · Updated 3 years ago
- Fast implementation of BERT inference directly on NVIDIA (CUDA, cuBLAS) and Intel MKL ☆547 · Updated 5 years ago
- Latency and Memory Analysis of Transformer Models for Training and Inference ☆479 · Updated 9 months ago
- OneFlow documentation ☆69 · Updated last year
- A high-performance distributed deep learning system targeting large-scale and automated distributed training. ☆333 · Updated last month
- PyTorch distributed training acceleration framework ☆55 · Updated 5 months ago
- Official repository for DistFlashAttn: Distributed Memory-Efficient Attention for Long-Context LLM Training ☆222 · Updated last year
- Models and examples built with OneFlow ☆101 · Updated last year