bytedance / effective_transformer
Running BERT without Padding
☆480 · Updated 3 years ago
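As a rough illustration of the padding-removal idea the repository's name refers to, here is a minimal PyTorch sketch (not from the effective_transformer codebase; the function names and shapes are hypothetical): non-pad tokens are gathered into one packed tensor, position-wise layers run only on real tokens, and the results are scattered back into the padded layout.

```python
import torch

def remove_padding(hidden, mask):
    # hidden: [B, S, H]; mask: [B, S] with 1 for real tokens, 0 for padding.
    B, S, H = hidden.shape
    token_idx = mask.reshape(-1).nonzero(as_tuple=True)[0]        # flat positions of real tokens
    packed = hidden.reshape(B * S, H).index_select(0, token_idx)  # [num_real_tokens, H]
    return packed, token_idx

def restore_padding(packed, token_idx, B, S):
    H = packed.shape[-1]
    out = packed.new_zeros(B * S, H)
    out.index_copy_(0, token_idx, packed)  # scatter real tokens back; pad rows stay zero
    return out.reshape(B, S, H)

# Run a position-wise sublayer (e.g. the FFN) on real tokens only.
B, S, H = 4, 128, 768
hidden = torch.randn(B, S, H)
mask = (torch.rand(B, S) > 0.3).long()
ffn = torch.nn.Sequential(
    torch.nn.Linear(H, 4 * H),
    torch.nn.GELU(),
    torch.nn.Linear(4 * H, H),
)

packed, token_idx = remove_padding(hidden, mask)
hidden = restore_padding(ffn(packed), token_idx, B, S)
```

The savings come from running the dense GEMMs over `num_real_tokens` rows instead of `B * S`; the library itself implements the equivalent remove/restore steps in CUDA around the transformer sublayers.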
Alternatives and similar repositories for effective_transformer
Users interested in effective_transformer are comparing it to the libraries listed below.
- Transformer-related optimizations, including BERT and GPT ☆59 · Updated 2 years ago
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆477 · Updated last year
- ☆219 · Updated 2 years ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆271 · Updated 2 years ago
- ☆413 · Updated 2 years ago
- Deep Learning Framework Performance Profiling Toolkit ☆296 · Updated 3 years ago
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training ☆405 · Updated 6 months ago
- Zero Bubble Pipeline Parallelism ☆449 · Updated 9 months ago
- A collection of memory-efficient attention operators implemented in the Triton language. ☆287 · Updated last year
- Simple Dynamic Batching Inference ☆145 · Updated 3 years ago
- ☆130 · Updated last year
- ☆141 · Updated last year
- ☆125 · Updated last year
- Microsoft Automatic Mixed Precision Library ☆635 · Updated 2 months ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆184 · Updated last month
- Fast implementation of BERT inference directly on NVIDIA (CUDA, cuBLAS) and Intel MKL ☆547 · Updated 5 years ago
- A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU. ☆1,542 · Updated 6 months ago
- Easy and Efficient Transformer: a scalable inference solution for large NLP models ☆265 · Updated last year
- Best practices for training LLaMA models in Megatron-LM ☆664 · Updated 2 years ago
- Transformer-related optimizations, including BERT and GPT ☆39 · Updated 3 years ago
- A Fast Multi-processing BERT Inference System ☆102 · Updated 3 years ago
- Latency and Memory Analysis of Transformer Models for Training and Inference ☆478 · Updated 9 months ago
- OneFlow models for benchmarking. ☆104 · Updated last year
- Performance of the C++ interfaces of FlashAttention and FlashAttention-2 in large language model (LLM) inference scenarios. ☆44 · Updated 11 months ago
- ☆79 · Updated 2 years ago
- ☆22 · Updated 2 years ago
- LLM training technologies developed by kwai ☆70 · Updated 3 weeks ago
- ☆192 · Updated 2 years ago
- Models and examples built with OneFlow ☆101 · Updated last year
- GPTQ inference Triton kernel ☆321 · Updated 2 years ago