Official repository of Evolutionary Optimization of Model Merging Recipes
☆1,403 · Nov 29, 2024 · Updated last year
Alternatives and similar repositories for evolutionary-model-merge
Users interested in evolutionary-model-merge are comparing it to the libraries listed below.
- Tools for merging pretrained large language models. ☆6,826 · Updated this week
- Codebase for Merging Language Models (ICML 2024) ☆863 · May 5, 2024 · Updated last year
- Plug in & Play Pytorch Implementation of the paper: "Evolutionary Optimization of Model Merging Recipes" by Sakana AI ☆31 · Nov 11, 2024 · Updated last year
- Reaching LLaMA2 Performance with 0.1M Dollars ☆988 · Jul 23, 2024 · Updated last year
- Unofficial Implementation of Evolutionary Model Merging ☆41 · Mar 28, 2024 · Updated last year
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,678 · Oct 28, 2024 · Updated last year
- The AI Scientist: Towards Fully Automated Open-Ended Scientific Discovery 🧑‍🔬 ☆12,216 · Dec 19, 2025 · Updated 2 months ago
- Code for Discovering Preference Optimization Algorithms with and for Large Language Models ☆192 · Jun 13, 2024 · Updated last year
- Robust recipes to align language models with human and AI preferences ☆5,510 · Sep 8, 2025 · Updated 5 months ago
- Training LLMs with QLoRA + FSDP ☆1,537 · Nov 9, 2024 · Updated last year
- A Self-adaptation Framework 🐙 that adapts LLMs for unseen tasks in real-time! ☆1,189 · Jan 30, 2025 · Updated last year
- FuseAI Project ☆590 · Jan 25, 2025 · Updated last year
- PyTorch native post-training library ☆5,691 · Updated this week
- PyTorch compiler that accelerates training and inference. Get built-in optimizations for performance, memory, parallelism, and easily wri… ☆1,444 · Feb 25, 2026 · Updated last week
- A framework for few-shot evaluation of language models. ☆11,540 · Updated this week
- 【TMM 2025🔥】 Mixture-of-Experts for Large Vision-Language Models ☆2,303 · Jul 15, 2025 · Updated 7 months ago
- Go ahead and axolotl questions ☆11,335 · Feb 26, 2026 · Updated last week
- Medusa: Simple Framework for Accelerating LLM Generation with Multiple Decoding Heads ☆2,710 · Jun 25, 2024 · Updated last year
- Official Pytorch repository for Extreme Compression of Large Language Models via Additive Quantization https://arxiv.org/pdf/2401.06118.p… ☆1,315 · Feb 26, 2026 · Updated last week
- Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in pytorch ☆1,896 · Feb 6, 2026 · Updated 3 weeks ago
- Distilabel is a framework for synthetic data and AI feedback for engineers who need fast, reliable and scalable pipelines based on verifi… ☆3,108 · Feb 23, 2026 · Updated last week
- Train transformer language models with reinforcement learning. ☆17,460 · Feb 26, 2026 · Updated last week
- Stanford NLP Python library for Representation Finetuning (ReFT) ☆1,560 · Jan 14, 2026 · Updated last month
- Large World Model -- Modeling Text and Video with Millions Context ☆7,399 · Oct 19, 2024 · Updated last year
- A library for easily merging multiple LLM experts, and efficiently train the merged LLM. ☆507 · Aug 26, 2024 · Updated last year
- LLM-Merging: Building LLMs Efficiently through Merging ☆209 · Sep 24, 2024 · Updated last year
- Official repository for ORPO ☆471 · May 31, 2024 · Updated last year
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,235 · May 8, 2024 · Updated last year
- Schedule-Free Optimization in PyTorch ☆2,257 · May 21, 2025 · Updated 9 months ago
- PyTorch code and models for V-JEPA self-supervised learning from video. ☆3,566 · Feb 27, 2025 · Updated last year
- Minimalistic large language model 3D-parallelism training ☆2,579 · Feb 19, 2026 · Updated 2 weeks ago
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆952 · Nov 16, 2025 · Updated 3 months ago
- [ICLR 2024] Efficient Streaming Language Models with Attention Sinks ☆7,187 · Jul 11, 2024 · Updated last year
- Mamba SSM architecture ☆17,257 · Feb 18, 2026 · Updated 2 weeks ago
- Code to train and evaluate Neural Attention Memory Models to obtain universally-applicable memory systems for transformers. ☆349 · Oct 22, 2024 · Updated last year
- Modeling, training, eval, and inference code for OLMo ☆6,326 · Nov 24, 2025 · Updated 3 months ago
- Accelerate your Hugging Face Transformers 7.6-9x. Native to Hugging Face and PyTorch. ☆685 · Aug 22, 2024 · Updated last year
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,663 · Mar 8, 2024 · Updated last year
- QLoRA: Efficient Finetuning of Quantized LLMs ☆10,843 · Jun 10, 2024 · Updated last year