arcee-ai / DistillKit
An Open Source Toolkit For LLM Distillation
☆817 · Updated last week
Alternatives and similar repositories for DistillKit
Users interested in DistillKit are comparing it to the libraries listed below.
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆812 · Updated 9 months ago
- ☆559 · Updated last year
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,226 · Updated 2 weeks ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ☆500 · Updated last year
- Recipes to scale inference-time compute of open models ☆1,122 · Updated 7 months ago
- Official repository for ORPO ☆468 · Updated last year
- Automatic evals for LLMs ☆569 · Updated last week
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆751 · Updated last year
- OLMoE: Open Mixture-of-Experts Language Models ☆940 · Updated 3 months ago
- Best practices for distilling large language models. ☆596 · Updated last year
- Automatically evaluate your LLMs in Google Colab ☆678 · Updated last year
- A project to improve skills of large language models ☆727 · Updated this week
- ☆1,035 · Updated last year
- Minimalistic large language model 3D-parallelism training ☆2,396 · Updated 3 weeks ago
- ☆969 · Updated 11 months ago
- LLM model quantization (compression) toolkit with hw acceleration support for Nvidia CUDA, AMD ROCm, Intel XPU and Intel/AMD/Apple CPU vi… ☆952 · Updated this week
- Chat Templates for 🤗 HuggingFace Large Language Models ☆708 · Updated last year
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up Long-context LLMs' inference, approximate and dynamic sparse calculate the attention… ☆1,170 · Updated 3 months ago
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆259 · Updated last year
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆245 · Updated last year
- A family of compressed models obtained via pruning and knowledge distillation ☆361 · Updated last month
- Large Reasoning Models ☆806 · Updated last year
- 🤗 Benchmark Large Language Models Reliably On Your Data ☆420 · Updated this week
- ☆242 · Updated 3 months ago
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆366 · Updated last year
- FuseAI Project ☆585 · Updated 11 months ago
- A compact LLM pretrained in 9 days by using high quality data ☆337 · Updated 8 months ago
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆3,008 · Updated last week
- ☆1,050 · Updated 6 months ago
- PyTorch building blocks for the OLMo ecosystem ☆634 · Updated this week