NetEase-FuXi / EET
Easy and Efficient Transformer: Scalable Inference Solution for Large NLP Models
☆265 · Updated 9 months ago
Alternatives and similar repositories for EET
Users interested in EET are comparing it to the libraries listed below.
- ☆220 · Updated 2 years ago
- Running BERT without Padding ☆475 · Updated 3 years ago
- ParaGen is a PyTorch deep learning framework for parallel sequence generation. ☆186 · Updated 2 years ago
- Fast implementation of BERT inference directly on NVIDIA (CUDA, CUBLAS) and Intel MKL ☆546 · Updated 4 years ago
- LiBai(李白): A Toolbox for Large-Scale Distributed Parallel Training ☆408 · Updated last month
- A unified tokenization tool for Images, Chinese and English. ☆151 · Updated 2 years ago
- Efficient Inference for Big Models ☆588 · Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆69 · Updated 2 years ago
- ☆412 · Updated last year
- Code for CPM-2 Pre-Train ☆158 · Updated 2 years ago
- Pretrain CPM-1 ☆53 · Updated 4 years ago
- Transformer-related optimization, including BERT, GPT ☆39 · Updated 2 years ago
- Introduction to CPM ☆166 · Updated 3 years ago
- Simple Dynamic Batching Inference ☆145 · Updated 3 years ago
- Transformer-related optimization, including BERT, GPT ☆59 · Updated last year
- Best practice for training LLaMA models in Megatron-LM ☆661 · Updated last year
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆268 · Updated 2 years ago
- Code for the paper "Vocabulary Learning via Optimal Transport for Neural Machine Translation" ☆441 · Updated 3 years ago
- A PyTorch-based model pruning toolkit for pre-trained language models ☆388 · Updated 2 years ago
- Finetune CPM-1 ☆75 · Updated 2 years ago
- ☆168 · Updated 3 years ago
- ☆79 · Updated last year
- ☆253 · Updated 2 years ago
- Code used for sourcing and cleaning the BigScience ROOTS corpus ☆314 · Updated 2 years ago
- Efficient Training (including pre-training and fine-tuning) for Big Models ☆606 · Updated 2 weeks ago
- ⛵️ The official PyTorch implementation of "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing" (EMNLP 2020). ☆315 · Updated 2 years ago
- A Fast Multi-processing BERT Inference System ☆101 · Updated 2 years ago
- TensorFlow code and pre-trained models for BERT ☆116 · Updated 5 years ago
- OneFlow models for benchmarking ☆104 · Updated last year
- Models and examples built with OneFlow ☆99 · Updated 10 months ago