arcee-ai / DistillKit
An Open Source Toolkit For LLM Distillation
☆651 · Updated 2 weeks ago
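For context on the technique these repositories revolve around, below is a minimal, generic sketch of logit-based knowledge distillation with a temperature-scaled KL loss, assuming PyTorch. The function name `distillation_loss` and the `temperature`/`alpha` hyperparameters are illustrative only and are not DistillKit's API.

```python
# Generic sketch of logit-based knowledge distillation (not DistillKit's API).
# Assumes PyTorch; distillation_loss, temperature, and alpha are illustrative names.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      temperature=2.0, alpha=0.5):
    """Blend a softened teacher-matching KL term with the usual hard-label CE."""
    # Soft targets: KL divergence between temperature-scaled distributions,
    # rescaled by T^2 so gradients keep a comparable magnitude.
    soft = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * (temperature ** 2)
    # Hard targets: standard cross-entropy against the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

# Toy usage with random tensors standing in for model outputs.
student_logits = torch.randn(8, 32000, requires_grad=True)  # batch of 8, 32k vocab
teacher_logits = torch.randn(8, 32000)                       # teacher runs without grad
labels = torch.randint(0, 32000, (8,))
distillation_loss(student_logits, teacher_logits, labels).backward()
```

In practice the student would be trained on teacher logits produced over the same input batch, with the blend weight `alpha` and `temperature` tuned per task; the sketch above only shows the loss shape.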
Alternatives and similar repositories for DistillKit
Users interested in DistillKit are comparing it to the libraries listed below.
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … ☆713 · Updated 3 months ago
- Lighteval is your all-in-one toolkit for evaluating LLMs across multiple backends ☆1,629 · Updated this week
- Automatic evals for LLMs ☆429 · Updated 2 weeks ago
- Memory optimization and training recipes to extrapolate language models' context length to 1 million tokens, with minimal hardware. ☆731 · Updated 8 months ago
- EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language M… ☆223 · Updated 7 months ago
- ☆938 · Updated 4 months ago
- ☆520 · Updated 7 months ago
- Recipes to scale inference-time compute of open models ☆1,095 · Updated 3 weeks ago
- Official repository for ORPO ☆455 · Updated last year
- Production ready LLM model compression/quantization toolkit with hw accelerated inference support for both cpu/gpu via HF, vLLM, and SGLa… ☆625 · Updated this week
- [NeurIPS'24 Spotlight, ICLR'25, ICML'25] To speed up Long-context LLMs' inference, approximate and dynamic sparse calculation of the attention… ☆1,055 · Updated this week
- Large Reasoning Models ☆804 · Updated 6 months ago
- A library for easily merging multiple LLM experts and efficiently training the merged LLM. ☆482 · Updated 9 months ago
- ☆773 · Updated last month
- LongRoPE is a novel method that extends the context window of pre-trained LLMs to an impressive 2048k tokens. ☆231 · Updated 9 months ago
- OLMoE: Open Mixture-of-Experts Language Models ☆785 · Updated 3 months ago
- A family of compressed models obtained via pruning and knowledge distillation ☆343 · Updated 7 months ago
- Chat Templates for 🤗 HuggingFace Large Language Models ☆672 · Updated 6 months ago
- ☆1,025 · Updated 6 months ago
- [ICML'24] Data and code for our paper "Training-Free Long-Context Scaling of Large Language Models" ☆410 · Updated 8 months ago
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM ☆1,500 · Updated this week
- 🤗 Benchmark Large Language Models Reliably On Your Data ☆329 · Updated this week
- Best practices for distilling large language models. ☆553 · Updated last year
- A compact LLM pretrained in 9 days by using high quality data ☆314 · Updated 2 months ago
- Memory layers use a trainable key-value lookup mechanism to add extra parameters to a model without increasing FLOPs. Conceptually, spars… ☆337 · Updated 6 months ago
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models ☆238 · Updated last year
- Tina: Tiny Reasoning Models via LoRA ☆258 · Updated 3 weeks ago
- FuseAI Project ☆573 · Updated 4 months ago
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆857 · Updated last week
- Scalable toolkit for efficient model alignment ☆814 · Updated 3 weeks ago