Nicolas-BZRD / llm-recipes
☆29 · Updated last year
Alternatives and similar repositories for llm-recipes:
Users interested in llm-recipes are comparing it to the libraries listed below.
- ☆10 · Updated last month
- Code for "Everybody Prune Now: Structured Pruning of LLMs with only Forward Passes" ☆27 · Updated last year
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆202 · Updated 2 weeks ago
- An Experiment on Dynamic NTK Scaling RoPE ☆62 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal… ☆48 · Updated 2 years ago
- DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization (ACL 2022) ☆50 · Updated last year
- Repository for Sparse Finetuning of LLMs via a modified version of the MosaicML llmfoundry ☆40 · Updated last year
- Repo for the EMNLP'24 paper "Dual-Space Knowledge Distillation for Large Language Models". A general white-box KD framework for both same… ☆44 · Updated 4 months ago
- Is gradient information useful for pruning LLMs? ☆43 · Updated 11 months ago
- ☆76 · Updated 2 months ago
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models ☆140 · Updated 2 years ago
- The HELMET Benchmark ☆122 · Updated 2 weeks ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆76 · Updated last year
- Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding (EMNLP 2023 Long) ☆57 · Updated 6 months ago
- [ICLR 2025] MiniPLM: Knowledge Distillation for Pre-Training Language Models ☆34 · Updated 4 months ago
- Implementation of Speculative Sampling as described in "Accelerating Large Language Model Decoding with Speculative Sampling" by DeepMind ☆91 · Updated last year
- Organize the Web: Constructing Domains Enhances Pre-Training Data Curation ☆39 · Updated last month
- ☆125 · Updated last year
- Code for paper "Patch-Level Training for Large Language Models" ☆81 · Updated 4 months ago
- This repository combines the CPO and SimPO methods for better reference-free preference learning. ☆52 · Updated 7 months ago
- [ICML 2024] Selecting High-Quality Data for Training Language Models ☆158 · Updated 9 months ago
- Implementation of the paper: "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆53 · Updated last week
- [ICML 2024 Spotlight] Fine-Tuning Pre-trained Large Language Models Sparsely ☆22 · Updated 9 months ago
- ☆48 · Updated 11 months ago
- [ICLR 2025] 🧬 RegMix: Data Mixture as Regression for Language Model Pre-training (Spotlight) ☆117 · Updated last month
- Repository for "Propagating Knowledge Updates to LMs Through Distillation" (NeurIPS 2023) ☆25 · Updated 7 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆46 · Updated last month
- [ACL 2024] Long-Context Language Modeling with Parallel Encodings ☆153 · Updated 9 months ago
- Training code for Baby-Llama, our submission to the strict-small track of the BabyLM challenge. ☆78 · Updated last year
- MultilingualSIFT: Multilingual Supervised Instruction Fine-tuning ☆89 · Updated last year