arcee-ai / DistillKit
An Open Source Toolkit For LLM Distillation
☆698 · Updated 3 weeks ago
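For orientation, toolkits in this space typically implement logit-level knowledge distillation: the student is trained to match a temperature-softened teacher distribution alongside the usual next-token loss. Below is a minimal sketch of that objective in plain PyTorch, with hypothetical tensor names; it is not DistillKit's actual API.

```python
# Minimal sketch of a logit-distillation objective (hypothetical names,
# not DistillKit's API): blend soft-target KL divergence with hard-label CE.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: KL(teacher || student) over temperature-scaled
    # distributions, rescaled by T^2 to keep gradient magnitudes comparable.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)
    # Hard targets: standard next-token cross-entropy on ground-truth labels.
    hard = F.cross_entropy(
        student_logits.view(-1, student_logits.size(-1)), labels.view(-1)
    )
    return alpha * soft + (1.0 - alpha) * hard
```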
Alternatives and similar repositories for DistillKit
Users interested in DistillKit are comparing it to the libraries listed below.
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆736 · Updated 4 months ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ☆489 · Updated 11 months ago
- Official repository for ORPO ☆460 · Updated last year
- ☆525 · Updated 8 months ago
- Automatic evals for LLMs ☆488 · Updated last month
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,766 · Updated this week
- Recipes to scale inference-time compute of open models ☆1,110 · Updated 2 months ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆739 · Updated 10 months ago
- OLMoE: Open Mixture-of-Experts Language Models ☆823 · Updated 4 months ago
- A project to improve skills of large language models ☆490 · Updated this week
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆229 · Updated 9 months ago
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up Long-context LLMs' inference, approximate and dynamic sparse calculate the attention… ☆1,080 · Updated this week
- A family of compressed models obtained via pruning and knowledge distillation ☆347 · Updated 8 months ago
- ☆1,028 · Updated 7 months ago
- Production ready LLM model compression/quantization toolkit with hw accelerated inference support for both cpu/gpu via HF, vLLM, and SGLa… ☆702 · Updated 2 weeks ago
- A compact LLM pretrained in 9 days by using high quality data ☆320 · Updated 3 months ago
- [ACL'24] Selective Reflection-Tuning: Student-Selected Data Recycling for LLM Instruction-Tuning ☆359 · Updated 10 months ago
- Code for Quiet-STaR ☆735 · Updated 11 months ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆659 · Updated last year
- Best practices for distilling large language models. ☆568 · Updated last year
- ☆953 · Updated 6 months ago
- Chat Templates for 🤗 HuggingFace Large Language Models ☆688 · Updated 7 months ago
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆1,690 · Updated last week
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆243 · Updated last year
- Large Reasoning Models ☆804 · Updated 7 months ago
- FuseAI Project ☆578 · Updated 6 months ago
- 🤗 Benchmark Large Language Models Reliably On Your Data ☆367 · Updated this week
- Minimalistic large language model 3D-parallelism training ☆2,068 · Updated 3 weeks ago
- Automatically evaluate your LLMs in Google Colab ☆649 · Updated last year
- Generative Representational Instruction Tuning ☆662 · Updated last month