bytedance / effective_transformer
Running BERT without Padding
☆471 · Updated 3 years ago
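The core optimization behind effective_transformer is to drop padding tokens before the position-wise parts of BERT (GEMMs, bias/activation, layer norm) and restore the rectangular layout only where attention requires it. Below is a minimal PyTorch sketch of that pack/unpack step; the function and tensor names are illustrative assumptions rather than the library's API, and the real library fuses these steps into CUDA kernels and also tracks the per-sequence offsets that attention needs.

```python
# Minimal sketch of the "remove padding, then restore it" idea behind
# effective_transformer, in plain PyTorch. Names and shapes here are
# illustrative; the actual library implements this with fused CUDA kernels.
import torch

def remove_padding(hidden, attention_mask):
    """Pack the valid tokens of a padded batch into one contiguous tensor.

    hidden:         (batch, seq_len, dim) padded activations
    attention_mask: (batch, seq_len), 1 for real tokens, 0 for padding
    Returns the packed (n_valid, dim) tensor plus the flat indices needed
    to undo the packing.
    """
    batch, seq_len, dim = hidden.shape
    flat = hidden.reshape(batch * seq_len, dim)
    indices = attention_mask.reshape(-1).nonzero(as_tuple=False).squeeze(1)
    return flat[indices], indices

def restore_padding(packed, indices, batch, seq_len):
    """Scatter packed tokens back into the padded (batch, seq_len, dim) layout."""
    dim = packed.shape[-1]
    flat = packed.new_zeros(batch * seq_len, dim)
    flat[indices] = packed
    return flat.reshape(batch, seq_len, dim)

# Example: a batch whose sequences have very different lengths.
hidden = torch.randn(4, 128, 768)
mask = torch.zeros(4, 128, dtype=torch.long)
for i, n in enumerate([17, 90, 5, 128]):
    mask[i, :n] = 1

packed, idx = remove_padding(hidden, mask)
print(packed.shape)  # torch.Size([240, 768]) -- only real tokens are computed on
restored = restore_padding(packed, idx, 4, 128)
assert torch.equal(restored * mask.unsqueeze(-1), hidden * mask.unsqueeze(-1))
```

On batches dominated by short sequences, the packed tensor is a small fraction of the padded one, which is where the speedup comes from.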
Alternatives and similar repositories for effective_transformer
Users interested in effective_transformer are comparing it to the libraries listed below.
- Optimized BERT transformer inference on NVIDIA GPUs. https://arxiv.org/abs/2210.03052 ☆473 · Updated last year
- Transformer-related optimization, including BERT and GPT ☆59 · Updated last year
- ☆411 · Updated last year
- ☆217 · Updated last year
- Zero Bubble Pipeline Parallelism ☆395 · Updated 3 weeks ago
- A collection of memory-efficient attention operators implemented in the Triton language. ☆271 · Updated 11 months ago
- Microsoft Automatic Mixed Precision Library ☆602 · Updated 8 months ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆267 · Updated 2 years ago
- Simple Dynamic Batching Inference ☆145 · Updated 3 years ago
- Fast implementation of BERT inference directly on NVIDIA (CUDA, cuBLAS) and Intel MKL ☆544 · Updated 4 years ago
- ☆118 · Updated last year
- ☆127 · Updated 5 months ago
- ☆138 · Updated last year
- Latency and Memory Analysis of Transformer Models for Training and Inference ☆421 · Updated last month
- Deep Learning Framework Performance Profiling Toolkit ☆285 · Updated 3 years ago
- A baseline repository for Auto-Parallelism in Training Neural Networks ☆142 · Updated 2 years ago
- PyTorch bindings for CUTLASS grouped GEMM. ☆122 · Updated 5 months ago
- LiBai(李白): A Toolbox for Large-Scale Distributed Parallel Training ☆406 · Updated 2 weeks ago
- ☆194 · Updated 2 years ago
- HierarchicalKV is part of NVIDIA Merlin and provides hierarchical key-value storage to meet RecSys requirements. The key capability of… ☆146 · Updated last week
- Pipeline Parallelism for PyTorch ☆766 · Updated 9 months ago
- Tutel MoE: an optimized Mixture-of-Experts library; supports DeepSeek FP8/FP4 ☆829 · Updated this week
- Code repo for the paper "LLM-QAT: Data-Free Quantization Aware Training for Large Language Models" ☆287 · Updated 2 months ago
- Scalable PaLM implementation in PyTorch ☆189 · Updated 2 years ago
- Official repository for DistFlashAttn: Distributed Memory-efficient Attention for Long-context LLMs Training ☆208 · Updated 9 months ago
- [ICML'21 Oral] I-BERT: Integer-only BERT Quantization ☆247 · Updated 2 years ago
- USP: Unified (a.k.a. Hybrid, 2D) Sequence Parallel Attention for Long-Context Transformer Model Training and Inference ☆506 · Updated last week
- A fast and user-friendly runtime for transformer inference (BERT, ALBERT, GPT-2, decoders, etc.) on CPU and GPU. ☆1,522 · Updated last month
- A high-performance framework for training wide-and-deep recommender systems on heterogeneous clusters ☆158 · Updated last year
- This repository contains integer operators on GPUs for PyTorch. ☆205 · Updated last year