Nicolas-BZRD / llm-recipes
☆29 · Updated last year
Alternatives and similar repositories for llm-recipes
Users interested in llm-recipes are comparing it to the libraries listed below.
- ☆10 · Updated 5 months ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆42 · Updated last year
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆224 · Updated 4 months ago
- Code for the paper "Patch-Level Training for Large Language Models" ☆85 · Updated 8 months ago
- Code for "SemDeDup", a simple method for identifying and removing semantic duplicates from a dataset (data pairs which are semantically s… ☆138 · Updated last year
- [ICLR 2025] MiniPLM: Knowledge Distillation for Pre-Training Language Models ☆50 · Updated 7 months ago
- An Experiment on Dynamic NTK Scaling RoPE ☆64 · Updated last year
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models ☆142 · Updated 2 years ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated last year
- Code for the paper "Towards the Law of Capacity Gap in Distilling Language Models" ☆100 · Updated last year
- DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization (ACL 2022) ☆50 · Updated 2 years ago
- Joint use of the CPO and SimPO methods for improved reference-free preference learning ☆53 · Updated 11 months ago
- Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding (EMNLP 2023 Long) ☆60 · Updated 9 months ago
- Is gradient information useful for pruning LLMs? ☆46 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆52 · Updated 2 years ago
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆122 · Updated 6 months ago
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆118 · Updated last year
- Unofficial implementation of AlpaGasus ☆92 · Updated last year
- Code implementation for "NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models" (EMNLP … ☆16 · Updated last year
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed". ☆174 · Updated 3 months ago
- DSIR large-scale data selection framework for language model training ☆252 · Updated last year
- Training code for Baby-Llama, our submission to the strict-small track of the BabyLM challenge. ☆81 · Updated last year
- Code for Zero-Shot Tokenizer Transfer ☆133 · Updated 6 months ago
- Official implementation for "Extending LLMs’ Context Window with 100 Samples" ☆79 · Updated last year
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆98 · Updated 9 months ago
- Are Intermediate Layers and Labels Really Necessary? A General Language Model Distillation Method; GKD: A General Knowledge Distillation… ☆32 · Updated last year
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆52 · Updated 5 months ago
- ☆223 · Updated last year
- [ICML 2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely ☆23 · Updated last year
- The HELMET Benchmark ☆156 · Updated 3 months ago