An Open Source Toolkit For LLM Distillation
☆891 · Updated Mar 14, 2026
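As a quick orientation to what these toolkits automate, the core training step in LLM distillation is matching a student model's output distribution to a teacher's softened distribution. The snippet below is a minimal, hypothetical sketch of that loss in PyTorch; the helper name and temperature default are illustrative assumptions, not DistillKit's actual API.

```python
import torch
import torch.nn.functional as F

def soft_target_loss(student_logits: torch.Tensor,
                     teacher_logits: torch.Tensor,
                     temperature: float = 2.0) -> torch.Tensor:
    """Hypothetical helper: forward-KL distillation loss on softened logits."""
    # Soften both distributions with the same temperature.
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # KL(teacher || student), scaled by T^2 so its gradient magnitude stays
    # comparable to a hard-label cross-entropy term (Hinton et al., 2015).
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2
```

In practice this soft-target term is usually mixed with the ordinary next-token cross-entropy on the ground-truth labels; the libraries listed below differ mainly in which divergence they use and how they orchestrate teacher inference.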
Alternatives and similar repositories for DistillKit
Users interested in DistillKit are comparing it to the libraries listed below.
- A pipeline for LLM knowledge distillation ☆113 · Updated Apr 2, 2025
- Easy to use, High Performant Knowledge Distillation for LLMs ☆96 · Updated May 5, 2025
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆263 · Updated Apr 23, 2024
- Tools for merging pretrained large language models. ☆6,867 · Updated Mar 15, 2026
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆253 · Updated Oct 30, 2024
- Official PyTorch implementation of DistiLLM: Towards Streamlined Distillation for Large Language Models (ICML 2024) ☆254 · Updated Mar 13, 2025
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆3,131 · Updated this week
- Best practices for distilling large language models. ☆610 · Updated Feb 1, 2024
- This repository collects papers for "A Survey on Knowledge Distillation of Large Language Models". We break down KD into Knowledge Elicit… ☆1,265 · Updated Mar 9, 2025
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆833 · Updated Mar 17, 2025
- Go ahead and axolotl questions ☆11,460 · Updated this week
- This is our own implementation of 'Layer Selective Rank Reduction' ☆240 · Updated May 26, 2024
- Minimalistic large language model 3D-parallelism training ☆2,617 · Updated Feb 19, 2026
- A family of compressed models obtained via pruning and knowledge distillation ☆369 · Updated Nov 6, 2025
- AllenAI's post-training codebase ☆3,629 · Updated this week
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. ☆2,956 · Updated this week
- [ICLR 2025] MiniPLM: Knowledge Distillation for Pre-Training Language Models ☆73 · Updated Nov 23, 2024
- Efficient Triton Kernels for LLM Training ☆6,216 · Updated this week
- Multi-LoRA inference server that scales to 1000s of fine-tuned LLMs ☆3,739 · Updated May 21, 2025
- Library for model distillation ☆165 · Updated Sep 6, 2025
- Examples for using the SiLLM framework for training and running Large Language Models (LLMs) on Apple Silicon ☆16 · Updated May 8, 2025
- Official implementation of Half-Quadratic Quantization (HQQ) ☆919 · Updated Feb 26, 2026
- Curated list of datasets and tools for post-training. ☆4,344 · Updated Mar 9, 2026
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,234 · Updated May 8, 2024
- OpenDiLoCo: An Open-Source Framework for Globally Distributed Low-Communication Training ☆560 · Updated Jan 13, 2025
- Robust recipes to align language models with human and AI preferences ☆5,527 · Updated Sep 8, 2025
- A framework for few-shot evaluation of language models. ☆11,704 · Updated Mar 5, 2026
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,339 · Updated Mar 9, 2026
- Automatically evaluate your LLMs in Google Colab ☆687 · Updated May 7, 2024
- A comprehensive repository of reasoning tasks for LLMs (and beyond) ☆465 · Updated Sep 27, 2024
- SGLang is a high-performance serving framework for large language models and multimodal models. ☆24,829 · Updated this week
- Synthetic data curation for post-training and structured data extraction ☆1,646 · Updated this week
- Official homepage for "Self-Harmonized Chain of Thought" (NAACL 2025) ☆92 · Updated Jan 23, 2025
- Optimizing inference proxy for LLMs ☆3,381 · Updated Jan 28, 2026
- Fast Multimodal Semantic Deduplication & Filtering ☆897 · Updated Jan 20, 2026
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆2,891 · Updated this week