raymin0223 / fast_robust_early_exit
Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding (EMNLP 2023 Long)
☆52 · Updated last month
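For orientation: early exiting lets an autoregressive decoder emit a token from an intermediate layer once its prediction looks confident enough, instead of always running the full stack. Below is a minimal, illustrative sketch of a confidence-thresholded exit criterion; `layers` and `lm_head` are hypothetical interfaces for illustration, not this repository's API.

```python
import torch

def early_exit_next_token(layers, lm_head, hidden, threshold=0.9):
    """Pick the next token, exiting at the first layer whose top-1
    softmax probability reaches `threshold`.

    `layers` is a list of decoder-layer callables and `lm_head` maps a
    hidden state to vocabulary logits -- assumed interfaces, sketch only.
    """
    token = None
    for depth, layer in enumerate(layers, start=1):
        hidden = layer(hidden)
        probs = torch.softmax(lm_head(hidden), dim=-1)
        confidence, token = probs.max(dim=-1)
        if confidence.item() >= threshold:
            return int(token), depth      # exited early: later layers skipped
    return int(token), len(layers)        # fell through to the full depth
```

Judging from the paper title, the framework pairs such an exit criterion with synchronized parallel decoding so that shallow-exit tokens can be checked against the full model; see the repository for the actual method.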
Related projects
Alternatives and complementary repositories for fast_robust_early_exit
- Long Context Extension and Generalization in LLMs ☆39 · Updated last month
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆44 · Updated last year
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆133 · Updated last month
- Official repository of "Distort, Distract, Decode: Instruction-Tuned Model Can Refine its Response from Noisy Instructions", ICLR 2024 Sp… ☆19 · Updated 8 months ago
- ☆42 · Updated 5 months ago
- The source code of "Merging Experts into One: Improving Computational Efficiency of Mixture of Experts" (EMNLP 2023) ☆34 · Updated 7 months ago
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆28 · Updated 7 months ago
- ☆107 · Updated 3 months ago
- ☆46 · Updated last year
- Repo for the ACL 2023 Findings paper "Emergent Modularity in Pre-trained Transformers" ☆19 · Updated last year
- [NeurIPS'23] Speculative Decoding with Big Little Decoder (see the speculative-decoding sketch after this list) ☆85 · Updated 9 months ago
- Repo for the EMNLP'24 paper "Dual-Space Knowledge Distillation for Large Language Models" ☆36 · Updated this week
- Ouroboros: Speculative Decoding with Large Model Enhanced Drafting (EMNLP 2024 main) ☆74 · Updated 3 weeks ago
- A Closer Look into Mixture-of-Experts in Large Language Models ☆38 · Updated 3 months ago
- SWIFT: On-the-Fly Self-Speculative Decoding for LLM Inference Acceleration ☆22 · Updated 3 weeks ago
- The official implementation of the paper "Demystifying the Compression of Mixture-of-Experts Through a Unified Framework" ☆48 · Updated 2 weeks ago
- [EMNLP 2023] Context Compression for Auto-regressive Transformers with Sentinel Tokens ☆21 · Updated last year
- Source code for "Model Tells You What to Discard: Adaptive KV Cache Compression for LLMs" ☆32 · Updated 2 months ago
- ☆31 · Updated 2 months ago
- Repository for "Propagating Knowledge Updates to LMs Through Distillation" (NeurIPS 2023) ☆24 · Updated 2 months ago
- The Efficiency Spectrum of LLM ☆52 · Updated 11 months ago
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆111 · Updated last week
- Official implementation for Yuan & Liu & Zhong et al., KV Cache Compression, But What Must We Give in Return? A Comprehensive Benchmark o… ☆46 · Updated 3 weeks ago
- ☆31 · Updated 2 months ago
- Multi-Candidate Speculative Decoding ☆28 · Updated 6 months ago
- A curated list of awesome resources dedicated to Scaling Laws for LLMs ☆63 · Updated last year
- AutoPEFT: Automatic Configuration Search for Parameter-Efficient Fine-Tuning (Zhou et al., TACL) ☆42 · Updated 7 months ago
- Official implementation of SLEB: Streamlining LLMs through Redundancy Verification and Elimination of Transformer Blocks ☆29 · Updated 4 months ago
- Code for our paper "Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation" (EMNLP 2023 Findings) ☆33 · Updated 11 months ago
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆108 · Updated 7 months ago
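Several of the entries above (Big Little Decoder, Ouroboros, SWIFT, Multi-Candidate Speculative Decoding, and the EMNLP 2023 Findings paper) are speculative decoding methods. The sketch below illustrates the shared idea in its simplest greedy form, under assumed interfaces: a small draft model proposes a few tokens, and the large target model verifies them in a single parallel pass. `draft_model` and `target_model` are hypothetical callables, not any listed repo's API.

```python
import torch

def speculative_decode_round(draft_model, target_model, prefix, k=4):
    """One draft-then-verify round (greedy variant).

    Both models are assumed to map a (1, seq_len) token tensor to
    (1, seq_len, vocab) next-token logits -- illustrative only.
    """
    # 1. The cheap draft model proposes k tokens autoregressively.
    draft = list(prefix)
    for _ in range(k):
        logits = draft_model(torch.tensor([draft]))
        draft.append(int(logits[0, -1].argmax()))

    # 2. The expensive target model scores all proposals in ONE forward pass.
    preds = target_model(torch.tensor([draft])).argmax(dim=-1)[0]

    # 3. Accept draft tokens until the first disagreement; at that point
    #    take the target's token and discard the rest of the draft.
    accepted = list(prefix)
    for i in range(len(prefix), len(draft)):
        target_tok = int(preds[i - 1])   # target's prediction for position i
        accepted.append(target_tok)
        if target_tok != draft[i]:
            break
    return accepted
```

Because the target model checks all k proposals in one forward pass, a round costs roughly one large-model step yet can accept several tokens; the listed papers differ mainly in where the draft comes from (a separate small model, the target's own shallow layers, or multiple candidate drafts).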