SakanaAI / evolutionary-model-merge
Official repository of Evolutionary Optimization of Model Merging Recipes
☆1,382 · Updated 11 months ago
Alternatives and similar repositories for evolutionary-model-merge
Users interested in evolutionary-model-merge are comparing it to the libraries listed below.
- The official implementation of Self-Play Fine-Tuning (SPIN) ☆1,217 · Updated last year
- Implementation of the training framework proposed in Self-Rewarding Language Model, from MetaAI ☆1,403 · Updated last year
- GaLore: Memory-Efficient LLM Training by Gradient Low-Rank Projection ☆1,623 · Updated last year
- Reaching LLaMA2 Performance with 0.1M Dollars ☆987 · Updated last year
- Implementation of "BitNet: Scaling 1-bit Transformers for Large Language Models" in PyTorch ☆1,895 · Updated 3 weeks ago
- Code for Quiet-STaR ☆741 · Updated last year
- Codebase for Merging Language Models (ICML 2024) ☆859 · Updated last year
- [ICLR 2025] Samba: Simple Hybrid State Space Models for Efficient Unlimited Context Language Modeling ☆927 · Updated this week
- Xwin-LM: Powerful, Stable, and Reproducible LLM Alignment ☆1,042 · Updated last year
- Stanford NLP Python library for Representation Finetuning (ReFT) ☆1,535 · Updated 9 months ago
- Training LLMs with QLoRA + FSDP ☆1,529 · Updated last year
- ☆1,035 · Updated 11 months ago
- A family of open-sourced Mixture-of-Experts (MoE) Large Language Models ☆1,635 · Updated last year
- ToRA is a series of Tool-integrated Reasoning LLM Agents designed to solve challenging mathematical reasoning problems by interacting wit… ☆1,104 · Updated last year
- A library with extensible implementations of DPO, KTO, PPO, ORPO, and other human-aware loss functions (HALOs). ☆893 · Updated last month
- ☆2,551 · Updated last year
- MLE-bench is a benchmark for measuring how well AI agents perform at machine learning engineering ☆1,178 · Updated last week
- YaRN: Efficient Context Window Extension of Large Language Models ☆1,637 · Updated last year
- [NeurIPS 2023] MeZO: Fine-Tuning Language Models with Just Forward Passes. https://arxiv.org/abs/2305.17333 ☆1,134 · Updated last year
- [NeurIPS 2024] SimPO: Simple Preference Optimization with a Reference-Free Reward ☆928 · Updated 9 months ago
- ☆446 · Updated last year
- Mamba-Chat: A chat LLM based on the state-space model architecture 🐍 ☆935 · Updated last year
- Official repository for ORPO ☆465 · Updated last year
- A Self-adaptation Framework 🐙 that adapts LLMs for unseen tasks in real-time! ☆1,166 · Updated 9 months ago
- ☆957 · Updated last year
- Recipes to scale inference-time compute of open models ☆1,118 · Updated 6 months ago
- [ICML'24 Spotlight] LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning ☆660 · Updated last year
- Official code for ReLoRA from the paper Stack More Layers Differently: High-Rank Training Through Low-Rank Updates ☆468 · Updated last year
- PyTorch implementation of Infini-Transformer from "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention… ☆292 · Updated last year
- ☆1,009 · Updated 9 months ago