google-research / distilling-step-by-step
☆580 · Updated Sep 7, 2023
Alternatives and similar repositories for distilling-step-by-step
Users that are interested in distilling-step-by-step are comparing it to the libraries listed below
- [EMNLP 2023] Lion: Adversarial Distillation of Proprietary Large Language Models ☆212 · Updated Feb 11, 2024
- This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicit… ☆1,252 · Updated Mar 9, 2025
- ☆43 · Updated Aug 23, 2023
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆250 · Updated Mar 13, 2025
- General technology for enabling AI capabilities w/ LLMs and MLLMs ☆4,284 · Updated Dec 22, 2025
- An Open Source Toolkit For LLM Distillation ☆860 · Updated Dec 21, 2025
- Large Language Models Are Reasoning Teachers (ACL 2023) ☆345 · Updated Mar 7, 2025
- Less is More: Task-aware Layer-wise Distillation for Language Model Compression (ICML 2023) ☆40 · Updated Aug 28, 2023
- Best practices for distilling large language models. ☆604 · Updated Feb 1, 2024
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning ☆640 · Updated Mar 4, 2024
- 🤗 PEFT: State-of-the-art Parameter-Efficient Fine-Tuning. ☆20,619 · Updated this week
- A simple and effective LLM pruning approach. ☆848 · Updated Aug 9, 2024
- ☆294 · Updated Dec 20, 2023
- AllenAI's post-training codebase ☆3,573 · Updated this week
- Accessible large language models via k-bit quantization for PyTorch. ☆7,952 · Updated this week
- [MLSys 2024 Best Paper Award] AWQ: Activation-aware Weight Quantization for LLM Compression and Acceleration ☆3,436 · Updated Jul 17, 2025
- ☆28 · Updated Mar 5, 2024
- Reformatted Alignment ☆111 · Updated Sep 23, 2024
- Robust recipes to align language models with human and AI preferences ☆5,495 · Updated Sep 8, 2025
- Tools for merging pretrained large language models. ☆6,783 · Updated Jan 26, 2026
- [ACL 2025 Findings] Official implementation of the paper "Unveiling the Key Factors for Distilling Chain-of-Thought Reasoning". ☆20 · Updated Feb 26, 2025
- ☆554 · Updated Jan 2, 2025
- An easy-to-use LLM quantization package with user-friendly APIs, based on the GPTQ algorithm. ☆5,028 · Updated Apr 11, 2025
- Train transformer language models with reinforcement learning. ☆17,360 · Updated this week
- [ACL 2024] LLM2LLM: Boosting LLMs with Novel Iterative Data Enhancement ☆193 · Updated Mar 25, 2024
- Foundation Architecture for (M)LLMs ☆3,130 · Updated Apr 11, 2024
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,188 · Updated Jul 11, 2024
- Training code for Baby-Llama, our submission to the strict-small track of the BabyLM challenge. ☆85 · Updated Oct 18, 2023
- The TinyLlama project is an open endeavor to pretrain a 1.1B Llama model on 3 trillion tokens. ☆8,891 · Updated May 3, 2024
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,835 · Updated Jun 10, 2024
- Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers". ☆2,254 · Updated Mar 27, 2024
- [ICML 2024] Break the Sequential Dependency of LLM Inference Using Lookahead Decoding ☆1,316 · Updated Mar 6, 2025
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆15 · Updated Oct 16, 2023
- ☆13 · Updated Jan 22, 2025
- Official code repository for PromptMix: A Class Boundary Augmentation Method for Large Language Model Distillation, EMNLP 2023 ☆12 · Updated Dec 13, 2023
- [NeurIPS 2023] LLM-Pruner: On the Structural Pruning of Large Language Models. Support Llama-3/3.1, Llama-2, LLaMA, BLOOM, Vicuna, Baich… ☆1,106 · Updated Oct 7, 2024
- Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot". ☆871 · Updated Aug 20, 2024
- Beyond KV Caching: Shared Attention for Efficient LLMs ☆20 · Updated Jul 19, 2024
- Reproduction of "RLCD Reinforcement Learning from Contrast Distillation for Language Model Alignment☆69Aug 18, 2023Updated 2 years ago