TsinghuaAI / TDS
A plug-in for Microsoft DeepSpeed that fixes a bug in DeepSpeed's pipeline parallelism
☆25 · Updated 4 years ago
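For context on what such a plug-in patches, below is a minimal sketch of DeepSpeed's pipeline-parallel API. This is plain illustrative DeepSpeed usage under assumed defaults, not TDS's own code, and it presumes a distributed launch (e.g. via the `deepspeed` CLI) so that ranks and the process group are set up.

```python
# Minimal DeepSpeed pipeline-parallel sketch (illustrative; not TDS code).
# Assumes launch via the `deepspeed` CLI so the process group is initialized.
import torch.nn as nn
import deepspeed
from deepspeed.pipe import PipelineModule

# A toy 8-layer stack; PipelineModule partitions it across pipeline stages.
layers = [nn.Linear(512, 512) for _ in range(8)]
model = PipelineModule(layers=layers, num_stages=2, loss_fn=nn.MSELoss())

engine, _, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config={
        "train_batch_size": 8,
        "train_micro_batch_size_per_gpu": 2,  # micro-batches fill the pipeline
        "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    },
)
# engine.train_batch(data_iter) would then drive the pipeline schedule.
```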
Alternatives and similar repositories for TDS
Users interested in TDS are comparing it to the libraries listed below.
- Pretrain CPM-1 · ☆52 · Updated 4 years ago
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration · ☆115 · Updated 3 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 · ☆69 · Updated 2 years ago
- Introduction to CPM · ☆165 · Updated 4 years ago
- Scalable PaLM implementation in PyTorch · ☆190 · Updated 3 years ago
- Must-read papers on improving efficiency for pre-trained language models. · ☆105 · Updated 3 years ago
- BANG is a new pretraining model to bridge the gap between Autoregressive (AR) and Non-autoregressive (NAR) Generation. AR and NAR generat… · ☆28 · Updated 3 years ago
- Backup of Zhihu posts from 香侬科技 (北京香侬慧语科技有限责任公司) · ☆43 · Updated 5 years ago
- ☆45 · Updated 4 years ago
- [ACL 2022] Structured Pruning Learns Compact and Accurate Models (https://arxiv.org/abs/2204.00408) · ☆198 · Updated 2 years ago
- [EMNLP 2022] Training Language Models with Memory Augmentation (https://arxiv.org/abs/2205.12674) · ☆195 · Updated 2 years ago
- DeeBERT: Dynamic Early Exiting for Accelerating BERT Inference · ☆161 · Updated 3 years ago
- ParaGen is a PyTorch deep learning framework for parallel sequence generation. · ☆185 · Updated 3 years ago
- Notes from my introductory NLP course at Fudan University · ☆37 · Updated 4 years ago
- Code associated with the paper **SkipBERT: Efficient Inference with Shallow Layer Skipping** (ACL 2022) · ☆16 · Updated 3 years ago
- ☆54 · Updated 3 years ago
- ☆46 · Updated 4 years ago
- JsonTuning: Towards Generalizable, Robust, and Controllable Instruction Tuning · ☆10 · Updated last year
- reStructured Pre-training · ☆99 · Updated 3 years ago
- Method to improve inference time for BERT. This is an implementation of the paper "PoWER-BERT: Accelerating BERT Inference via Pro… · ☆62 · Updated 4 months ago
- Source code for the NAACL 2021 paper "TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference" · ☆48 · Updated 3 years ago
- Code for our paper "Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation" (EMNLP 2023 Findings); a generic draft-then-verify sketch appears after this list · ☆46 · Updated 2 years ago
- ☆59 · Updated 2 years ago
- Train LLMs (BLOOM, LLaMA, Baichuan2-7B, ChatGLM3-6B) with DeepSpeed pipeline mode; faster than ZeRO/ZeRO++/FSDP · ☆98 · Updated last year
- [KDD'22] Learned Token Pruning for Transformers · ☆102 · Updated 2 years ago
- Code for the paper "A Theoretical Analysis of the Repetition Problem in Text Generation" (AAAI 2021) · ☆57 · Updated 3 years ago
- ☆105 · Updated 2 years ago
- Paradigm shift in natural language processing · ☆42 · Updated 3 years ago
- Inference framework for MoE layers based on TensorRT with Python binding · ☆41 · Updated 4 years ago
- Source code of the paper "BP-Transformer: Modelling Long-Range Context via Binary Partitioning" · ☆127 · Updated 4 years ago
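As referenced in the Speculative Decoding entry above, here is a minimal greedy draft-then-verify sketch of the general technique. It is illustrative only, not the paper's implementation (which uses its own drafter design); `target` and `draft` are hypothetical callables mapping a (1, T) tensor of token ids to (1, T, V) logits.

```python
# Generic greedy speculative decoding sketch (illustrative; not the paper's code).
import torch

@torch.no_grad()
def speculative_decode(target, draft, prefix, k=4, max_new_tokens=32):
    out = prefix.clone()  # (1, T0) LongTensor of token ids
    while out.size(1) < prefix.size(1) + max_new_tokens:
        # 1) Draft k tokens autoregressively with the cheap model.
        drafted = out
        for _ in range(k):
            nxt = draft(drafted)[:, -1].argmax(-1, keepdim=True)
            drafted = torch.cat([drafted, nxt], dim=1)
        # 2) One target pass scores all drafted positions in parallel;
        #    tgt_pred[:, t] is the target's greedy choice for token t+1.
        tgt_pred = target(drafted).argmax(-1)
        # 3) Accept drafted tokens while they match the target's choices.
        n_accept = 0
        for i in range(out.size(1) - 1, drafted.size(1) - 1):
            if tgt_pred[0, i] == drafted[0, i + 1]:
                n_accept += 1
            else:
                break
        out = drafted[:, : out.size(1) + n_accept]
        # Append the target's own next token so every round makes progress.
        out = torch.cat([out, tgt_pred[:, out.size(1) - 1 : out.size(1)]], dim=1)
    return out[:, : prefix.size(1) + max_new_tokens]
```

The speedup comes from step 2: the expensive target model runs once per round over the whole drafted block instead of once per token, while step 3 guarantees the output matches what greedy decoding with the target alone would have produced.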