arcee-ai / DistillKit
An Open Source Toolkit For LLM Distillation
☆859 · Updated last month
Alternatives and similar repositories for DistillKit
Users interested in DistillKit are comparing it to the libraries listed below.
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆826 · Updated 10 months ago
- Automatic evals for LLMs ☆579 · Updated last month
- ☆564 · Updated last year
- Recipes to scale inference-time compute of open models ☆1,124 · Updated 8 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆2,291 · Updated 2 weeks ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ☆505 · Updated last year
- OLMoE: Open Mixture-of-Experts Language Models ☆965 · Updated 4 months ago
- Official repository for ORPO ☆469 · Updated last year
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆751 · Updated last year
- 🤗 Benchmark Large Language Models Reliably On Your Data ☆426 · Updated last month
- ☆970 · Updated last year
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆371 · Updated last year (a toy sketch of this lookup idea follows the list)
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆250 · Updated last year
- Best practices for distilling large language models. ☆604 · Updated 2 years ago
- ☆1,033 · Updated last year
- Chat Templates for 🤗 HuggingFace Large Language Models ☆713 · Updated last year
- Stanford NLP Python library for Representation Finetuning (ReFT) ☆1,555 · Updated 3 weeks ago
- Generative Representational Instruction Tuning ☆685 · Updated 7 months ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up long-context LLMs' inference, approximate and dynamic sparse calculation of the attention… ☆1,180 · Updated 4 months ago
- [ICLR 2026] Tina: Tiny Reasoning Models via LoRA ☆319 · Updated 4 months ago
- A family of compressed models obtained via pruning and knowledge distillation ☆364 · Updated 3 months ago
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆366 · Updated last year
- Train Models Contrastively in PyTorch ☆774 · Updated 10 months ago
- Code for Quiet-STaR ☆740 · Updated last year
- A project to improve the skills of large language models ☆813 · Updated this week
- Minimalistic large language model 3D-parallelism training ☆2,529 · Updated last month
- ☆696 · Updated 9 months ago
- Automatically evaluate your LLMs in Google Colab ☆685 · Updated last year
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆261 · Updated last year
- Arena-Hard-Auto: An automatic LLM benchmark. ☆994 · Updated 7 months ago
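The memory-layers entry above describes a trainable key-value lookup that adds parameters to a model while keeping per-token compute low. Below is a toy PyTorch sketch of that general idea, not the linked repository's code: the class name, sizes, and the dense key scoring are simplifications introduced here for illustration (real memory layers typically use a product-key lookup so that only a small fraction of keys is ever scored).

```python
# Toy illustration of a memory layer: a trainable key-value lookup added to a
# model via a residual branch. All names and hyperparameters are assumptions
# for this sketch; they are not taken from the repository listed above.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyMemoryLayer(nn.Module):
    """Each token retrieves a weighted mix of its top-k trainable memory slots.

    The memory bank adds num_keys * dim extra parameters, but only topk value
    vectors are gathered per token. (For simplicity this sketch still scores
    every key; product-key variants avoid that full scan.)
    """

    def __init__(self, dim: int, num_keys: int = 4096, topk: int = 8):
        super().__init__()
        self.query_proj = nn.Linear(dim, dim)
        self.keys = nn.Parameter(torch.randn(num_keys, dim) * 0.02)
        self.values = nn.Parameter(torch.randn(num_keys, dim) * 0.02)
        self.topk = topk

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = self.query_proj(x)                           # (batch, seq, dim)
        scores = q @ self.keys.t()                       # (batch, seq, num_keys)
        top_scores, top_idx = scores.topk(self.topk, dim=-1)
        weights = F.softmax(top_scores, dim=-1)          # (batch, seq, topk)
        picked = self.values[top_idx]                    # (batch, seq, topk, dim)
        memory_out = (weights.unsqueeze(-1) * picked).sum(dim=-2)
        return x + memory_out                            # residual connection


if __name__ == "__main__":
    layer = ToyMemoryLayer(dim=64)
    tokens = torch.randn(2, 10, 64)
    print(layer(tokens).shape)  # torch.Size([2, 10, 64])
```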