EvolKit is an innovative framework designed to automatically enhance the complexity of instructions used for fine-tuning Large Language Models (LLMs).
☆253 · Oct 30, 2024 · Updated last year
Alternatives and similar repositories for EvolKit
Users interested in EvolKit are comparing it to the libraries listed below.
- An Open Source Toolkit For LLM Distillation (☆894 · Mar 14, 2026 · Updated last week)
- Tools for merging pretrained large language models. (☆6,895 · Mar 15, 2026 · Updated last week)
- This is our own implementation of 'Layer Selective Rank Reduction' (☆240 · May 26, 2024 · Updated last year)
- Official code for "MAmmoTH2: Scaling Instructions from the Web" [NeurIPS 2024] (☆149 · Oct 27, 2024 · Updated last year)
- Automated Identification of Redundant Layer Blocks for Pruning in Large Language Models (☆263 · Apr 23, 2024 · Updated last year)
- [ICLR 2025] Alignment Data Synthesis from Scratch by Prompting Aligned LLMs with Nothing. Your efficient and high-quality synthetic data … (☆833 · Mar 17, 2025 · Updated last year)
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… (☆3,131 · Mar 16, 2026 · Updated last week)
- ☆138 · Aug 19, 2024 · Updated last year
- ☆137 · Mar 20, 2025 · Updated last year
- Low-Rank adapter extraction for fine-tuned transformers models (☆181 · May 2, 2024 · Updated last year)
- ☆567 · Nov 20, 2024 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment (☆62 · Aug 30, 2024 · Updated last year)
- Domain Adapted Language Modeling Toolkit - E2E RAG (☆337 · Nov 8, 2024 · Updated last year)
- ☆56 · Nov 6, 2024 · Updated last year
- Q-GaLore: Quantized GaLore with INT4 Projection and Layer-Adaptive Low-Rank Gradients. (☆204 · Jul 17, 2024 · Updated last year)
- Minimalistic large language model 3D-parallelism training (☆2,617 · Feb 19, 2026 · Updated last month)
- LLM-Training-API: Including Embeddings & ReRankers, mergekit, LaserRMT (☆27 · Feb 18, 2024 · Updated 2 years ago)
- Convert LLaMA3.1-8B to DeepSeek R1 MLA & MoE (raw) (☆24 · Mar 10, 2025 · Updated last year)
- [ICLR 2024] Sheared LLaMA: Accelerating Language Model Pre-training via Structured Pruning (☆643 · Mar 4, 2024 · Updated 2 years ago)
- Efficient encoder-decoder architecture for small language models (≤1B parameters) with cross-architecture knowledge distillation and visi… (☆32 · Feb 7, 2025 · Updated last year)
- Freeing data processing from scripting madness by providing a set of platform-agnostic customizable pipeline processing blocks. (☆2,965 · Mar 16, 2026 · Updated last week)
- JAX Scalify: end-to-end scaled arithmetics (☆18 · Oct 30, 2024 · Updated last year)
- An unsupervised model merging algorithm for Transformers-based language models. (☆108 · Apr 29, 2024 · Updated last year)
- The official evaluation suite and dynamic data release for MixEval. (☆256 · Nov 10, 2024 · Updated last year)
- Curated list of datasets and tools for post-training. (☆4,344 · Mar 9, 2026 · Updated 2 weeks ago)
- ☆143 · Aug 20, 2025 · Updated 7 months ago
- FuseAI Project (☆592 · Jan 25, 2025 · Updated last year)
- Transformers-compatible library for applying various compression algorithms to LLMs for optimized deployment with vLLM (☆2,891 · Updated this week)
- A simple example of VAEs with KANs (☆12 · May 17, 2024 · Updated last year)
- code for the table-based open domain question answering project, with paper title: "Reasoning over Hybrid Chain for Table-and-Text Open D… (☆12 · Sep 16, 2022 · Updated 3 years ago)
- ☆16 · Jul 23, 2024 · Updated last year
- Positional Skip-wise Training for Efficient Context Window Extension of LLMs to Extremely Length (ICLR 2024) (☆209 · May 20, 2024 · Updated last year)
- ☆327 · Jul 25, 2024 · Updated last year
- Atropos is a Language Model Reinforcement Learning Environments framework for collecting and evaluating LLM trajectories through diverse … (☆914 · Updated this week)
- Go ahead and axolotl questions (☆11,460 · Mar 17, 2026 · Updated last week)
- 🚢 Data Toolkit for Sailor Language Models (☆96 · Feb 24, 2025 · Updated last year)
- Create Custom LLMs (☆1,820 · Nov 8, 2025 · Updated 4 months ago)
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) (☆145 · Sep 20, 2024 · Updated last year)
- Codebase for Aria - an Open Multimodal Native MoE (☆1,086 · Jan 22, 2025 · Updated last year)