jongwooko / distillm
Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024)
☆244 · Updated 9 months ago
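For orientation, "distillation" here means training a small student model to match a large teacher's next-token distribution. The snippet below is a minimal sketch of the standard temperature-scaled forward-KL objective in PyTorch; it is an illustrative baseline only, not DistiLLM's actual loss, and the function and tensor names are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def kd_loss(student_logits: torch.Tensor,
            teacher_logits: torch.Tensor,
            temperature: float = 2.0) -> torch.Tensor:
    """Temperature-scaled KL(teacher || student), averaged per token."""
    vocab = student_logits.size(-1)
    # Flatten (batch, seq, vocab) -> (tokens, vocab) so 'batchmean'
    # averages over token positions rather than only the batch dim.
    log_p_s = F.log_softmax(student_logits.reshape(-1, vocab) / temperature, dim=-1)
    p_t = F.softmax(teacher_logits.reshape(-1, vocab) / temperature, dim=-1)
    # The T^2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_s, p_t, reduction="batchmean") * temperature ** 2
```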
Alternatives and similar repositories for distillm
Users interested in distillm are comparing it to the libraries listed below:
- Explorations into some recent techniques surrounding speculative decoding ☆295 · Updated 11 months ago
- ☆235 · Updated last year
- ☆272 · Updated 2 years ago
- Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind (see the acceptance-rule sketch after this list) ☆107 · Updated last year
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆446 · Updated last year
- DSIR large-scale data selection framework for language model training ☆266 · Updated last year
- Official PyTorch implementation of QA-LoRA ☆145 · Updated last year
- A family of compressed models obtained via pruning and knowledge distillation ☆361 · Updated last month
- Training code for Baby-Llama, our submission to the strict-small track of the BabyLM challenge. ☆85 · Updated 2 years ago
- [EMNLP 2023] Adapting Language Models to Compress Long Contexts ☆321 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆148 · Updated last year
- Unofficial implementation of the paper "Mixture-of-Depths: Dynamically allocating compute in transformer-based language models" ☆176 · Updated last year
- REST: Retrieval-Based Speculative Decoding, NAACL 2024 ☆212 · Updated 3 months ago
- Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding (EMNLP 2023 Long) ☆64 · Updated last year
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed". ☆185 · Updated last month
- ☆128 · Updated last year
- Repo for the EMNLP'24 paper "Dual-Space Knowledge Distillation for Large Language Models". A general white-box KD framework for both same… ☆60 · Updated 3 months ago
- Repo for Rho-1: Token-level Data Selection & Selective Pretraining of LLMs. ☆452 · Updated last year
- The repo for In-context Autoencoder ☆156 · Updated last year
- Code for the paper "∞Bench: Extending Long Context Evaluation Beyond 100K Tokens": https://arxiv.org/abs/2402.13718 ☆361 · Updated last year
- Homepage for ProLong (Princeton long-context language models) and the paper "How to Train Long-Context Language Models (Effectively)" ☆241 · Updated 3 months ago
- Code and data for "Long-context LLMs Struggle with Long In-context Learning" [TMLR 2025] ☆110 · Updated 10 months ago
- Code associated with the paper "Draft & Verify: Lossless Large Language Model Acceleration via Self-Speculative Decoding" ☆215 · Updated 10 months ago
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆167 · Updated last year
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆632 · Updated last year
- ☆142 · Updated last year
- This PyTorch package implements MoEBERT: from BERT to Mixture-of-Experts via Importance-Guided Adaptation (NAACL 2022). ☆112 · Updated 3 years ago
- [EMNLP 2024] LongAlign: A Recipe for Long Context Alignment of LLMs ☆257 · Updated last year
- Implementation of the paper "LongRoPE: Extending LLM Context Window Beyond 2 Million Tokens" ☆152 · Updated last year
- [ICLR 2025] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (Spotlight) ☆181 · Updated 10 months ago
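Several entries above (the DeepMind speculative-sampling implementation, REST, Draft & Verify) center on speculative decoding: a cheap draft model proposes tokens that the target model then verifies in parallel. As a hedged sketch, assuming `p` and `q` are the target- and draft-model next-token distributions at one position (names chosen for this example, not taken from any of the repos), the core accept/resample rule from the Chen et al. paper looks like:

```python
import torch

def accept_or_resample(token: int, p: torch.Tensor, q: torch.Tensor):
    """Return (accepted_token, was_accepted) for one drafted token."""
    # Accept the drafted token with probability min(1, p[x] / q[x]).
    if torch.rand(()) < torch.clamp(p[token] / q[token], max=1.0):
        return token, True
    # On rejection, resample from the normalized residual max(0, p - q),
    # which keeps the overall output distributed exactly as p.
    residual = torch.clamp(p - q, min=0.0)
    residual = residual / residual.sum()
    return int(torch.multinomial(residual, 1)), False
```

Because rejected drafts are replaced by a draw from the residual distribution, the combined procedure samples from the target distribution exactly; the speedup comes from verifying several drafted tokens per target-model forward pass.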