NetEase-FuXi / EET
Easy and Efficient Transformer: Scalable Inference Solution for Large NLP Models
☆265 Updated 10 months ago
Alternatives and similar repositories for EET
Users interested in EET are comparing it to the libraries listed below
- ☆220 Updated 2 years ago
- Running BERT without Padding ☆475 Updated 3 years ago
- ParaGen is a PyTorch deep learning framework for parallel sequence generation. ☆186 Updated 2 years ago
- LiBai (李白): A Toolbox for Large-Scale Distributed Parallel Training ☆408 Updated 2 months ago
- Efficient Inference for Big Models ☆588 Updated 2 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆69 Updated 2 years ago
- Fast implementation of BERT inference directly on NVIDIA (CUDA, cuBLAS) and Intel MKL ☆546 Updated 4 years ago
- A unified tokenization tool for images, Chinese, and English. ☆151 Updated 2 years ago
- Code for CPM-2 pre-training ☆158 Updated 2 years ago
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆268 Updated 2 years ago
- ☆412 Updated last year
- A PyTorch-based model pruning toolkit for pre-trained language models ☆390 Updated 2 years ago
- Simple Dynamic Batching Inference ☆146 Updated 3 years ago
- Transformer-related optimization, including BERT, GPT ☆39 Updated 2 years ago
- Finetune CPM-1 ☆75 Updated 2 years ago
- Pretrain CPM-1 ☆53 Updated 4 years ago
- Introduction to CPM ☆166 Updated 4 years ago
- Model Compression for Big Models ☆165 Updated 2 years ago
- Best practices for training LLaMA models in Megatron-LM ☆661 Updated last year
- OneFlow models for benchmarking. ☆104 Updated last year
- Transformer-related optimization, including BERT, GPT ☆59 Updated 2 years ago
- Scalable PaLM implementation in PyTorch ☆190 Updated 2 years ago
- A more efficient GLM implementation! ☆54 Updated 2 years ago
- Code used for sourcing and cleaning the BigScience ROOTS corpus ☆314 Updated 2 years ago
- TensorFlow code and pre-trained models for BERT ☆116 Updated 5 years ago
- ⛵️ The official PyTorch implementation of "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing" (EMNLP 2020) ☆315 Updated 2 years ago
- ☆255 Updated 2 years ago
- Fast encoding detection and conversion for large numbers of text files, to assist data cleaning for the MNBVC corpus project ☆64 Updated 3 weeks ago
- Train LLaMA on a single A100 80G node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆224 Updated last year
- Examples of training models with hybrid parallelism using ColossalAI ☆340 Updated 2 years ago