Nicolas-BZRD / llm-distillation
☆10 · Updated 6 months ago
Alternatives and similar repositories for llm-distillation
Users interested in llm-distillation are comparing it to the repositories listed below.
- ☆29 · Updated last year
- Repository for "Propagating Knowledge Updates to LMs Through Distillation" (NeurIPS 2023). ☆26 · Updated 11 months ago
- ☆75 · Updated last year
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training ☆22 · Updated 11 months ago
- Code release for Dataless Knowledge Fusion by Merging Weights of Language Models (https://openreview.net/forum?id=FCnohuR6AnM) ☆89 · Updated 2 years ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆53 · Updated 2 years ago
- Code Implementation for "NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models" (EMNLP … ☆16 · Updated last year
- Learning adapter weights from task descriptions ☆19 · Updated last year
- Long Context Extension and Generalization in LLMs ☆58 · Updated 10 months ago
- Official code repo for the paper "Great Memory, Shallow Reasoning: Limits of kNN-LMs" ☆23 · Updated 3 months ago
- ☆39 · Updated last year
- ☆39 · Updated last year
- Organize the Web: Constructing Domains Enhances Pre-Training Data Curation ☆60 · Updated 3 months ago
- ☆21 · Updated 3 months ago
- Codebase for Hyperdecoders (https://arxiv.org/abs/2203.08304) ☆12 · Updated 2 years ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆41 · Updated last year
- Lightweight tool to identify Data Contamination in LLM evaluation ☆51 · Updated last year
- Language models scale reliably with over-training and on downstream tasks ☆97 · Updated last year
- Code for "Tracing Knowledge in Language Models Back to the Training Data" ☆38 · Updated 2 years ago
- ☆65 · Updated last year
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆52 · Updated 6 months ago
- Adding new tasks to T0 without catastrophic forgetting ☆33 · Updated 2 years ago
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" ☆77 · Updated 2 years ago
- Repo for ICML 2023 "Why do Nearest Neighbor Language Models Work?" ☆58 · Updated 2 years ago
- Codebase for Instruction Following without Instruction Tuning ☆35 · Updated 10 months ago
- ☆49 · Updated last year
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning ☆99 · Updated 2 years ago
- ☆26 · Updated last year
- Exploration of automated dataset selection approaches at large scales. ☆47 · Updated 5 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆44 · Updated last year