teelinsan / parallel-decoding
Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding"
☆114 · Updated 10 months ago
Alternatives and similar repositories for parallel-decoding:
Users interested in parallel-decoding are comparing it to the libraries listed below.
- ☆98 · Updated 10 months ago
- Sparse Backpropagation for Mixture-of-Expert Training ☆27 · Updated 6 months ago
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆27 · Updated 9 months ago
- ☆48 · Updated 8 months ago
- Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding (EMNLP 2023 Long) ☆56 · Updated 3 months ago
- Code for paper "Diffusion Language Models Can Perform Many Tasks with Scaling and Instruction-Finetuning" ☆65 · Updated 11 months ago
- ☆135 · Updated last year
- ☆107 · Updated 3 months ago
- ☆36 · Updated 4 months ago
- Triton-based implementation of Sparse Mixture of Experts. ☆192 · Updated last month
- Stick-breaking attention ☆41 · Updated this week
- Triton implementation of FlashAttention2 that adds Custom Masks. ☆88 · Updated 5 months ago
- Official GitHub repo for the paper "Compression Represents Intelligence Linearly" [COLM 2024] ☆130 · Updated 3 months ago
- [NeurIPS'23] Speculative Decoding with Big Little Decoder ☆88 · Updated 11 months ago
- Simple and efficient PyTorch-native transformer training and inference (batched) ☆66 · Updated 9 months ago
- Homepage for ProLong (Princeton long-context language models) and paper "How to Train Long-Context Language Models (Effectively)" ☆145 · Updated last month
- Repo for ACL 2023 Findings paper "Emergent Modularity in Pre-trained Transformers" ☆21 · Updated last year
- Official repository for LightSeq: Sequence Level Parallelism for Distributed Training of Long Context Transformers ☆204 · Updated 4 months ago
- [ICLR 2024 Spotlight] Code for the paper "Merge, Then Compress: Demystify Efficient SMoE with Hints from Its Routing Policy" ☆70 · Updated 7 months ago
- The Efficiency Spectrum of LLM ☆52 · Updated last year
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts (EMNLP 2023)" ☆34 · Updated 9 months ago
- Implementation of NAACL 2024 Outstanding Paper "LM-Infinite: Simple On-the-Fly Length Generalization for Large Language Models" ☆135 · Updated 3 months ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆48 · Updated last year
- Low-bit optimizers for PyTorch ☆125 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆96 · Updated 9 months ago
- ☆124 · Updated 11 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆75 · Updated 10 months ago
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies https://arxiv.org/abs/2407.13623 ☆75 · Updated 3 months ago
- ☆118 · Updated 5 months ago
- ☆83 · Updated 7 months ago