Nicolas-BZRD / llm-recipes
☆29 · Updated last year
Alternatives and similar repositories for llm-recipes
Users interested in llm-recipes are comparing it to the repositories listed below.
- ☆10 · Updated 6 months ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆41 · Updated last year
- An Experiment on Dynamic NTK Scaling RoPE ☆64 · Updated last year
- Code for paper "Patch-Level Training for Large Language Models" ☆86 · Updated 8 months ago
- [ICLR 2024] CLEX: Continuous Length Extrapolation for Large Language Models ☆78 · Updated last year
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆225 · Updated 5 months ago
- Code for "SemDeDup", a simple method for identifying and removing semantic duplicates from a dataset (data pairs which are semantically s… ☆139 · Updated last year
- [ICLR 2023] "Sparse MoE as the New Dropout: Scaling Dense and Self-Slimmable Transformers" by Tianlong Chen*, Zhenyu Zhang*, Ajay Jaiswal … ☆53 · Updated 2 years ago
- ☆127 · Updated last year
- Simple Parameter-efficient Fine-tuning for Transformer-based Masked Language-models ☆142 · Updated 2 years ago
- [ICLR 2025] MiniPLM: Knowledge Distillation for Pre-Training Language Models ☆54 · Updated 8 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆52 · Updated 6 months ago
- The HELMET Benchmark ☆163 · Updated 3 months ago
- Organize the Web: Constructing Domains Enhances Pre-Training Data Curation ☆60 · Updated 3 months ago
- DQ-BART: Efficient Sequence-to-Sequence Model via Joint Distillation and Quantization (ACL 2022) ☆50 · Updated 2 years ago
- [NeurIPS 2024 Main Track] Code for the paper titled "Instruction Tuning With Loss Over Instructions" ☆38 · Updated last year
- ☆75 · Updated last year
- Layer-Condensed KV cache w/ 10 times larger batch size, fewer params and less computation. Dramatic speed up with better task performance… ☆151 · Updated 4 months ago
- The official implementation of the paper "What Matters in Transformers? Not All Attention is Needed". ☆174 · Updated 4 months ago
- A collection of instruction data and scripts for machine translation. ☆20 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆147 · Updated 10 months ago
- ☆95 · Updated last year
- Code Implementation for "NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models" (EMNLP … ☆16 · Updated last year
- This repository combines the CPO and SimPO methods for better reference-free preference learning. ☆56 · Updated last year
- Code for paper titled "Towards the Law of Capacity Gap in Distilling Language Models" ☆102 · Updated last year
- Unofficial implementation of AlpaGasus ☆92 · Updated last year
- Official implementation for 'Extending LLMs’ Context Window with 100 Samples' ☆79 · Updated last year
- [ICML'24] The official implementation of “Rethinking Optimization and Architecture for Tiny Language Models” ☆123 · Updated 6 months ago
- Spherical Merge Pytorch/HF format Language Models with minimal feature loss. ☆135 · Updated last year
- Repository of the paper "Accelerating Transformer Inference for Translation via Parallel Decoding" ☆119 · Updated last year