TsinghuaAI / TDS
A plug-in for Microsoft DeepSpeed that fixes a bug in the DeepSpeed pipeline
☆26 · Updated 3 years ago
Alternatives and similar repositories for TDS:
Users interested in TDS are comparing it to the libraries listed below.
- Pretrain CPM-1 ☆51 · Updated 3 years ago
- Notes of my introduction about NLP in Fudan University ☆37 · Updated 3 years ago
- Source code for NAACL 2021 paper "TR-BERT: Dynamic Token Reduction for Accelerating BERT Inference" ☆44 · Updated 2 years ago
- BANG is a new pretraining model to Bridge the gap between Autoregressive (AR) and Non-autoregressive (NAR) Generation. AR and NAR generat… ☆28 · Updated 2 years ago
- Must-read papers on improving efficiency for pre-trained language models. ☆102 · Updated 2 years ago
- Method to improve inference time for BERT. This is an implementation of the paper titled "PoWER-BERT: Accelerating BERT Inference via Pro…" ☆59 · Updated last year
- ☆53 · Updated 2 years ago
- Inference framework for MoE layers based on TensorRT with Python binding ☆41 · Updated 3 years ago
- Ongoing research training transformer language models at scale, including: BERT & GPT-2 ☆69 · Updated last year
- Code for the paper "A Theoretical Analysis of the Repetition Problem in Text Generation" in AAAI 2021. ☆51 · Updated 2 years ago
- A pre-trained model with multi-exit transformer architecture. ☆53 · Updated 2 years ago
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆84 · Updated 3 months ago
- JsonTuning: Towards Generalizable, Robust, and Controllable Instruction Tuning ☆10 · Updated 2 months ago
- CFBench: A Comprehensive Constraints-Following Benchmark for LLMs ☆27 · Updated 5 months ago
- ☆43 · Updated 3 years ago
- PyTorch implementation of the paper "Efficient Nearest Neighbor Language Models" (EMNLP 2021) ☆71 · Updated 3 years ago
- Princeton NLP's pre-training library based on fairseq with DeepSpeed kernel integration 🚃 ☆113 · Updated 2 years ago
- Source code for the paper "Knowledge Inheritance for Pre-trained Language Models" ☆38 · Updated 2 years ago
- Code for our paper "Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation" (EMNLP 2023 Findings) ☆37 · Updated last year
- Code for the ACL 2023 paper "Lifting the Curse of Capacity Gap in Distilling Language Models" ☆28 · Updated last year
- Paradigm shift in natural language processing ☆42 · Updated 2 years ago
- ☆94 · Updated 4 months ago
- ☆59 · Updated last year
- Odysseus: Playground of LLM Sequence Parallelism ☆64 · Updated 7 months ago
- A collection of researchers and research institutions in the field of text generation, from industry in China and abroad, listed in no particular order; under active update, and contributions are welcome ☆48 · Updated 4 years ago
- Repository of the LV-Eval benchmark ☆58 · Updated 5 months ago
- ☆30 · Updated last year
- The official repository for the paper "From Zero to Hero: Examining the Power of Symbolic Tasks in Instruction Tuning". ☆63 · Updated last year
- ☆106 · Updated last year
- ☆37 · Updated 2 years ago