bytedance / effective_transformer
Running BERT without Padding
☆475 · Updated 3 years ago
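The tagline names the core trick: gather the non-padding tokens of a batch into one dense tensor before the GEMM-heavy layers, and scatter them back wherever per-sequence structure is needed. Below is a minimal PyTorch sketch of that idea, assuming a 0/1 attention mask; `pack_tokens` and `unpack_tokens` are hypothetical names, not the repository's API, and the repository implements the same idea with CUDA kernels rather than Python-level indexing.

```python
import torch

def pack_tokens(hidden, mask):
    """Gather non-padding tokens into a dense (num_tokens, H) tensor."""
    # hidden: (B, S, H); mask: (B, S), 1 for real tokens, 0 for padding.
    idx = mask.flatten().nonzero(as_tuple=True)[0]      # flat positions of real tokens
    packed = hidden.reshape(-1, hidden.size(-1))[idx]   # (num_tokens, H)
    return packed, idx

def unpack_tokens(packed, idx, batch_size, seq_len):
    """Scatter packed tokens back to the padded (B, S, H) layout."""
    out = packed.new_zeros(batch_size * seq_len, packed.size(-1))
    out[idx] = packed
    return out.view(batch_size, seq_len, -1)

B, S, H = 4, 8, 16
hidden = torch.randn(B, S, H)
lengths = torch.tensor([8, 5, 3, 6])
mask = (torch.arange(S).unsqueeze(0) < lengths.unsqueeze(1)).long()

packed, idx = pack_tokens(hidden, mask)      # (22, 16): only the real tokens
packed = torch.nn.functional.gelu(packed)    # stand-in for an FFN/GEMM layer
restored = unpack_tokens(packed, idx, B, S)  # padded layout restored for attention
```

Since the FFN and projection GEMMs scale with the number of tokens they process, skipping the pads saves compute in direct proportion to the batch's padding ratio.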
Alternatives and similar repositories for effective_transformer
Users interested in effective_transformer are comparing it to the libraries listed below.
- Transformer-related optimization, including BERT and GPT ☆59 · Updated 2 years ago
- Optimized BERT transformer inference on NVIDIA GPU. https://arxiv.org/abs/2210.03052 ☆479 · Updated last year
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆270 · Updated 2 years ago
- Deep Learning Framework Performance Profiling Toolkit ☆294 · Updated 3 years ago
- A collection of memory-efficient attention operators implemented in the Triton language. ☆284 · Updated last year
- Simple Dynamic Batching Inference ☆145 · Updated 3 years ago
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training ☆407 · Updated 3 months ago
- Microsoft Automatic Mixed Precision Library ☆626 · Updated last year
- Fast implementation of BERT inference directly on NVIDIA (CUDA, cuBLAS) and Intel MKL ☆544 · Updated 4 years ago
- Zero Bubble Pipeline Parallelism ☆433 · Updated 6 months ago
- Transformer-related optimization, including BERT and GPT ☆39 · Updated 2 years ago
- A Fast Multi-processing BERT Inference System ☆101 · Updated 3 years ago
- OneFlow models for benchmarking. ☆104 · Updated last year
- Easy and Efficient Transformer: Scalable Inference Solution for Large NLP Models ☆264 · Updated 11 months ago
- A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU. ☆1,535 · Updated 3 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆165 · Updated last month
- Latency and Memory Analysis of Transformer Models for Training and Inference ☆461 · Updated 6 months ago
- OneFlow documentation ☆69 · Updated last year
- Pipeline Parallelism for PyTorch ☆781 · Updated last year
- Best practice for training LLaMA models in Megatron-LM ☆659 · Updated last year
- Performance of the C++ interface of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios. ☆41 · Updated 8 months ago
- PyTorch distributed training acceleration framework ☆53 · Updated 3 months ago
- Models and examples built with OneFlow ☆100 · Updated last year