arcee-ai / DistillKit
An Open Source Toolkit For LLM Distillation
☆569 · Updated 3 months ago
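DistillKit is a toolkit for LLM distillation. As background only (this is not DistillKit's API; the function names below are hypothetical), the classic soft-label knowledge-distillation objective matches a student's temperature-softened output distribution to a teacher's via KL divergence, which can be sketched as:

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    scaled = [z / temperature for z in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(z - m) for z in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2
    so gradient magnitudes stay comparable across temperatures."""
    p = softmax(teacher_logits, temperature)  # teacher (target) distribution
    q = softmax(student_logits, temperature)  # student distribution
    kl = sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
    return (temperature ** 2) * kl
```

When the student's logits match the teacher's, the loss is zero; any divergence yields a positive penalty. Real toolkits compute this per token over a vocabulary and usually mix it with a standard cross-entropy term on ground-truth labels.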
Alternatives and similar repositories for DistillKit:
Users interested in DistillKit are comparing it to the libraries listed below.
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆671 · Updated 3 weeks ago
- ☆1,014 · Updated 3 months ago
- ☆508 · Updated 4 months ago
- Automatic evals for LLMs ☆359 · Updated this week
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,400 · Updated this week
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆712 · Updated 6 months ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆210 · Updated 5 months ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ☆468 · Updated 7 months ago
- Official repository for ORPO ☆447 · Updated 10 months ago
- Code for Quiet-STaR ☆729 · Updated 7 months ago
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆352 · Updated 7 months ago
- Implementation of the paper "Data Engineering for Scaling Language Models to 128K Context" ☆457 · Updated last year
- Recipes to scale inference-time compute of open models ☆1,050 · Updated last month
- Manage scalable open LLM inference endpoints in Slurm clusters ☆254 · Updated 9 months ago
- ☆916 · Updated 2 months ago
- OLMoE: Open Mixture-of-Experts Language Models ☆704 · Updated 3 weeks ago
- Large Reasoning Models ☆800 · Updated 4 months ago
- Best practices for distilling large language models. ☆516 · Updated last year
- A compact LLM pretrained in 9 days by using high-quality data ☆308 · Updated this week
- awesome synthetic (text) datasets ☆267 · Updated 5 months ago
- [NeurIPS'24 Spotlight, ICLR'25] To speed up long-context LLMs' inference, approximately and dynamically sparse-compute the attention, which r… ☆965 · Updated this week
- Automatically evaluate your LLMs in Google Colab ☆613 · Updated 11 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆313 · Updated 4 months ago
- Code for "LayerSkip: Enabling Early Exit Inference and Self-Speculative Decoding", ACL 2024 ☆284 · Updated last month
- ☆604 · Updated last week
- A pipeline for LLM knowledge distillation ☆100 · Updated last week
- ☆524 · Updated 7 months ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆648 · Updated 10 months ago
- A bagel, with everything. ☆318 · Updated last year
- Generative Representational Instruction Tuning ☆617 · Updated 3 weeks ago