Nicolas-BZRD / llm-distillation
☆10 · Updated 9 months ago
Alternatives and similar repositories for llm-distillation
Users interested in llm-distillation are comparing it to the repositories listed below.
- ☆29 · Updated last year
- Code Implementation for "NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models" (EMNLP …) · ☆16 · Updated 2 years ago
- Repository for "Propagating Knowledge Updates to LMs Through Distillation" (NeurIPS 2023) · ☆26 · Updated last year
- [ACL'24 Oral] Analysing The Impact of Sequence Composition on Language Model Pre-Training · ☆22 · Updated last year
- Code release for Dataless Knowledge Fusion by Merging Weights of Language Models (https://openreview.net/forum?id=FCnohuR6AnM) · ☆91 · Updated 2 years ago
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… · ☆55 · Updated 2 years ago
- Exploration of automated dataset selection approaches at large scales · ☆48 · Updated 8 months ago
- Long Context Extension and Generalization in LLMs · ☆62 · Updated last year
- Adding new tasks to T0 without catastrophic forgetting · ☆33 · Updated 3 years ago
- ☆23 · Updated 3 weeks ago
- ☆39 · Updated last year
- ☆76 · Updated last year
- [NeurIPS 2024 Spotlight] Code and data for the paper "Finding Transformer Circuits with Edge Pruning" · ☆61 · Updated 3 months ago
- A Kernel-Based View of Language Model Fine-Tuning (https://arxiv.org/abs/2210.05643) · ☆78 · Updated 2 years ago
- Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding (EMNLP 2023 Long) · ☆64 · Updated last year
- Learning adapter weights from task descriptions · ☆19 · Updated 2 years ago
- Retrieval as Attention · ☆82 · Updated 2 years ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) · ☆44 · Updated last year
- Repo for the paper "Large Language Models Struggle to Learn Long-Tail Knowledge" · ☆78 · Updated 2 years ago
- Codebase for Instruction Following without Instruction Tuning · ☆36 · Updated last year
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models · ☆55 · Updated 9 months ago
- [ICML 2023] Exploring the Benefits of Training Expert Language Models over Instruction Tuning · ☆98 · Updated 2 years ago
- Organize the Web: Constructing Domains Enhances Pre-Training Data Curation · ☆69 · Updated 6 months ago
- [NeurIPS 2024 Main Track] Code for the paper titled "Instruction Tuning With Loss Over Instructions" · ☆39 · Updated last year
- Code for "Tracing Knowledge in Language Models Back to the Training Data" · ☆39 · Updated 2 years ago
- Code for the EMNLP24 paper "A simple and effective L2 norm based method for KV Cache compression" · ☆17 · Updated 11 months ago
- ☆54 · Updated last year
- The source code of our work "Prepacking: A Simple Method for Fast Prefilling and Increased Throughput in Large Language Models" [AISTATS …] · ☆60 · Updated last year
- Benchmarking Benchmark Leakage in Large Language Models · ☆56 · Updated last year
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model · ☆44 · Updated last month