fangyuan-ksgk / Evolutionary-Model-Merge
Unofficial Implementation of Evolutionary Model Merging
☆39 · Updated last year
Alternatives and similar repositories for Evolutionary-Model-Merge
Users interested in Evolutionary-Model-Merge are comparing it to the libraries listed below.
- Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆146 · Updated 10 months ago
- Plug-and-play PyTorch implementation of the paper "Evolutionary Optimization of Model Merging Recipes" by Sakana AI ☆30 · Updated 8 months ago
- ☆83 · Updated 6 months ago
- A repository for research on medium-sized language models. ☆78 · Updated last year
- This is the official repository for Inheritune. ☆112 · Updated 5 months ago
- From GaLore to WeLore: How Low-Rank Weights Non-uniformly Emerge from Low-Rank Gradients. Ajay Jaiswal, Lu Yin, Zhenyu Zhang, Shiwei Liu, … ☆47 · Updated 3 months ago
- The official implementation of Self-Exploring Language Models (SELM) ☆64 · Updated last year
- Repo for "Z1: Efficient Test-time Scaling with Code" ☆63 · Updated 3 months ago
- PyTorch implementation of "Compressed Context Memory for Online Language Model Interaction" (ICLR'24) ☆61 · Updated last year
- ☆19 · Updated 7 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆98 · Updated 10 months ago
- [ACL 2025] An inference-time decoding strategy with adaptive foresight sampling ☆101 · Updated 2 months ago
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆34 · Updated 4 months ago
- Minimal implementation of the paper "Self-Play Fine-Tuning Converts Weak Language Models to Strong Language Models" (arXiv:2401.01335) ☆28 · Updated last year
- Code for "Critique Fine-Tuning: Learning to Critique is More Effective than Learning to Imitate" [COLM 2025] ☆169 · Updated 3 weeks ago
- [NeurIPS 2024] Low-rank memory-efficient optimizer without SVD ☆30 · Updated last month
- ☆101 · Updated 10 months ago
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in PyTorch ☆178 · Updated last month
- ☆68 · Updated last year
- Challenge LLMs to Reason About Reasoning: A Benchmark to Unveil Cognitive Depth in LLMs ☆50 · Updated last year
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆160 · Updated 3 months ago
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆116 · Updated last year
- ☆81 · Updated last week
- X-LoRA: Mixture of LoRA Experts ☆232 · Updated last year
- ☆83 · Updated 11 months ago
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆30 · Updated 9 months ago
- FuseAI Project ☆87 · Updated 6 months ago
- ☆118 · Updated 5 months ago
- The official repo for "LLoCo: Learning Long Contexts Offline" ☆118 · Updated last year
- [NeurIPS 2024] 📈 Scaling Laws with Vocabulary: Larger Models Deserve Larger Vocabularies (https://arxiv.org/abs/2407.13623) ☆86 · Updated 10 months ago
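Several entries above (the Evolutionary-Model-Merge repo itself and the reimplementation of Sakana AI's "Evolutionary Optimization of Model Merging Recipes") share one core idea: treat the per-layer coefficients used to merge two models as a genome, and search over them with an evolutionary loop. A minimal toy sketch of that idea, where the models are plain float lists and the fitness function is an invented stand-in for a real validation metric (all names and numbers here are illustrative, not any repo's actual API):

```python
import random

def merge(model_a, model_b, alphas):
    """Interpolate each layer: w = alpha * w_a + (1 - alpha) * w_b."""
    return [a * wa + (1 - a) * wb for a, wa, wb in zip(alphas, model_a, model_b)]

def fitness(model, target):
    """Toy objective: negative squared distance to an 'ideal' weight vector.
    A real setup would score the merged model on a validation set instead."""
    return -sum((w - t) ** 2 for w, t in zip(model, target))

def evolve(model_a, model_b, target, pop_size=20, generations=50, seed=0):
    """Evolve per-layer merge coefficients in [0, 1] via elitism + mutation."""
    rng = random.Random(seed)
    n_layers = len(model_a)
    # Random initial population of coefficient vectors.
    pop = [[rng.random() for _ in range(n_layers)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(
            pop,
            key=lambda a: fitness(merge(model_a, model_b, a), target),
            reverse=True,
        )
        elites = scored[: pop_size // 4]
        # Refill the population by mutating elites (Gaussian noise, clipped to [0, 1]).
        pop = elites + [
            [min(1.0, max(0.0, a + rng.gauss(0, 0.1))) for a in rng.choice(elites)]
            for _ in range(pop_size - len(elites))
        ]
    best = max(pop, key=lambda a: fitness(merge(model_a, model_b, a), target))
    return best, merge(model_a, model_b, best)

if __name__ == "__main__":
    model_a = [1.0, 0.0, 1.0]
    model_b = [0.0, 1.0, 0.0]
    target = [0.7, 0.3, 0.5]
    alphas, merged = evolve(model_a, model_b, target)
    print(alphas, merged)
```

The real systems differ mainly in scale and in the search algorithm (e.g. CMA-ES rather than this naive elitism-plus-mutation loop), and they can also search in data-flow space (which layers feed which), not just weight space.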