NetEase-FuXi / EET
Easy and Efficient Transformer: a scalable inference solution for large NLP models
☆265 · Updated 10 months ago
Alternatives and similar repositories for EET
Users interested in EET are comparing it to the libraries listed below.
- ☆219 · Updated 2 years ago
- Running BERT without Padding (see the packing sketch after this list) ☆475 · Updated 3 years ago
- A unified tokenization tool for images, Chinese, and English. ☆151 · Updated 2 years ago
- ParaGen is a PyTorch deep learning framework for parallel sequence generation. ☆185 · Updated 2 years ago
- Fast implementation of BERT inference directly on NVIDIA GPUs (CUDA, cuBLAS) and Intel MKL ☆546 · Updated 4 years ago
- Efficient inference for big models ☆588 · Updated 2 years ago
- LiBai (李白): a toolbox for large-scale distributed parallel training ☆407 · Updated 2 months ago
- Code for CPM-2 pre-training ☆158 · Updated 2 years ago
- A PyTorch-based model pruning toolkit for pre-trained language models ☆390 · Updated 2 years ago
- Transformer-related optimization, including BERT and GPT ☆59 · Updated 2 years ago
- Introduction to CPM ☆166 · Updated 4 years ago
- Ongoing research on training transformer language models at scale, including BERT and GPT-2 ☆68 · Updated 2 years ago
- Simple Dynamic Batching Inference (see the batching sketch after this list) ☆145 · Updated 3 years ago
- Finetune CPM-1 ☆75 · Updated 2 years ago
- Best practices for training LLaMA models in Megatron-LM ☆660 · Updated last year
- ☆413 · Updated last year
- Easy Parallel Library (EPL) is a general and efficient deep learning framework for distributed model training. ☆269 · Updated 2 years ago
- Pretrain CPM-1 ☆53 · Updated 4 years ago
- Transformer-related optimization, including BERT and GPT ☆39 · Updated 2 years ago
- Scalable PaLM implementation in PyTorch ☆188 · Updated 2 years ago
- Efficient training (including pre-training and fine-tuning) for big models ☆611 · Updated last month
- ⛵️ The official PyTorch implementation of "BERT-of-Theseus: Compressing BERT by Progressive Module Replacing" (EMNLP 2020) ☆315 · Updated 2 years ago
- Model compression for big models ☆165 · Updated 2 years ago
- Code used for sourcing and cleaning the BigScience ROOTS corpus ☆316 · Updated 2 years ago
- ☆219 · Updated 2 years ago
- Train LLaMA on a single A100 80GB node using 🤗 Transformers and 🚀 DeepSpeed pipeline parallelism ☆224 · Updated last year
- ☆79 · Updated last year
- ☆254 · Updated 3 years ago
- A text generation method that returns a generator, streaming out each token in real time during inference, based on Huggingface/… (see the streaming sketch after this list) ☆97 · Updated last year
- Large-scale model inference. ☆631 · Updated 2 years ago
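
The "Running BERT without Padding" entry above refers to removing wasted compute on pad tokens. A minimal sketch of that packing idea, assuming PyTorch (`pack`, `unpack`, and all names here are illustrative, not the repo's actual API): real tokens from a padded batch are gathered into one contiguous buffer before the expensive layers, then scattered back afterwards.

```python
# Minimal padding-free packing sketch (illustrative names, not the repo's API).
import torch

def pack(hidden: torch.Tensor, attention_mask: torch.Tensor):
    """hidden: (batch, seq_len, dim); attention_mask: (batch, seq_len), 1 = real token."""
    indices = attention_mask.flatten().nonzero(as_tuple=True)[0]  # flat positions of real tokens
    flat = hidden.reshape(-1, hidden.size(-1))                    # (batch * seq_len, dim)
    return flat[indices], indices                                 # (num_real_tokens, dim)

def unpack(packed: torch.Tensor, indices: torch.Tensor, batch: int, seq_len: int):
    flat = packed.new_zeros(batch * seq_len, packed.size(-1))
    flat[indices] = packed                                        # scatter back; pads stay zero
    return flat.reshape(batch, seq_len, -1)

# Toy usage: 5 real tokens are processed instead of 8 padded slots.
hidden = torch.randn(2, 4, 8)
mask = torch.tensor([[1, 1, 1, 0], [1, 1, 0, 0]])
packed, idx = pack(hidden, mask)
restored = unpack(packed, idx, batch=2, seq_len=4)
```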
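
The "Simple Dynamic Batching Inference" entry names a common serving technique: requests that arrive within a short window are merged so one forward pass serves many callers. A minimal sketch under that reading, with hypothetical names (`worker`, `submit`, `max_wait_ms`) not taken from the repo:

```python
# Minimal dynamic-batching loop (hypothetical names, not the repo's API).
import queue
import threading
import time

request_queue: queue.Queue = queue.Queue()

def worker(model_fn, max_batch: int = 8, max_wait_ms: int = 5):
    """Drain up to max_batch requests, waiting at most max_wait_ms, then run one batched call."""
    while True:
        batch = [request_queue.get()]                      # block until the first request arrives
        deadline = time.monotonic() + max_wait_ms / 1000
        while len(batch) < max_batch:
            remaining = deadline - time.monotonic()
            if remaining <= 0:
                break
            try:
                batch.append(request_queue.get(timeout=remaining))
            except queue.Empty:
                break
        outputs = model_fn([text for text, _ in batch])    # one forward pass for the whole batch
        for (_, reply_q), out in zip(batch, outputs):
            reply_q.put(out)

def submit(text: str) -> str:
    reply_q: queue.Queue = queue.Queue(maxsize=1)
    request_queue.put((text, reply_q))
    return reply_q.get()                                   # blocks until the worker replies

# Toy usage with a stand-in "model":
threading.Thread(target=worker, args=(lambda xs: [x.upper() for x in xs],), daemon=True).start()
print(submit("hello"))
```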
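
The streaming-generation entry above yields tokens as they are produced rather than waiting for the full output. That repo implements its own variant on top of Hugging Face code; a minimal sketch of the same effect using transformers' built-in `TextIteratorStreamer` (the model name `gpt2` is just an example) runs `generate()` in a background thread while the caller iterates over text chunks:

```python
# Minimal streaming generation via transformers' TextIteratorStreamer.
from threading import Thread
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer

def stream_generate(prompt: str, model_name: str = "gpt2"):
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    inputs = tokenizer(prompt, return_tensors="pt")
    streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
    # generate() blocks, so run it in a thread and consume the streamer as it fills.
    Thread(target=model.generate,
           kwargs=dict(**inputs, streamer=streamer, max_new_tokens=64),
           daemon=True).start()
    for chunk in streamer:            # yields decoded text pieces as tokens are produced
        yield chunk

# Toy usage: print each piece as it arrives instead of waiting for the full completion.
for piece in stream_generate("Once upon a time"):
    print(piece, end="", flush=True)
```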