fangyuan-ksgk / Evolutionary-Model-Merge
Unofficial Implementation of Evolutionary Model Merging
☆41 · Updated last year
Alternatives and similar repositories for Evolutionary-Model-Merge
Users interested in Evolutionary-Model-Merge are comparing it to the libraries listed below.
- Plug-and-play PyTorch implementation of the paper "Evolutionary Optimization of Model Merging Recipes" by Sakana AI ☆31 · Updated last year
- Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks (EMNLP'24) ☆147 · Updated last year
- ☆85 · Updated 2 months ago
- [TMLR 2026] When Attention Collapses: How Degenerate Layers in LLMs Enable Smaller, Stronger Models ☆121 · Updated 11 months ago
- Official repository for the paper "SwitchHead: Accelerating Transformers with Mixture-of-Experts Attention" ☆102 · Updated last year
- PyTorch implementation of "Compressed Context Memory for Online Language Model Interaction" (ICLR'24) ☆62 · Updated last year
- ☆112 · Updated last year
- Implementation of 🥥 Coconut, Chain of Continuous Thought, in PyTorch ☆182 · Updated 7 months ago
- ☆91 · Updated last year
- A repository for research on medium-sized language models. ☆77 · Updated last year
- [NeurIPS 2024] Low-rank memory-efficient optimizer without SVD ☆33 · Updated 7 months ago
- ☆71 · Updated last year
- ☆19 · Updated last year
- The official implementation of Self-Exploring Language Models (SELM) ☆63 · Updated last year
- Challenge LLMs to Reason About Reasoning: A Benchmark to Unveil Cognitive Depth in LLMs ☆51 · Updated last year
- Block Transformer: Global-to-Local Language Modeling for Fast Inference (NeurIPS 2024) ☆163 · Updated 9 months ago
- [ICML 2025] From Low Rank Gradient Subspace Stabilization to Low-Rank Weights: Observations, Theories and Applications ☆52 · Updated 3 months ago
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆35 · Updated 10 months ago
- [ACL 2025] An inference-time decoding strategy with adaptive foresight sampling ☆107 · Updated 8 months ago
- PyTorch implementation of the PEER block from the paper "Mixture of A Million Experts" by Xu Owen He at DeepMind ☆134 · Updated 3 months ago
- The official code repo and data hub for the top_nsigma sampling strategy for LLMs. ☆26 · Updated 11 months ago
- Official implementation of the ICML 2024 paper RoSA (Robust Adaptation) ☆44 · Updated last year
- [EMNLP'25 Industry] Repo for "Z1: Efficient Test-time Scaling with Code" ☆68 · Updated 9 months ago
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆31 · Updated last year
- X-LoRA: Mixture of LoRA Experts ☆261 · Updated last year
- ☆71 · Updated last year
- ☆123 · Updated 11 months ago
- [NAACL 2025] A Closer Look into Mixture-of-Experts in Large Language Models ☆58 · Updated 11 months ago
- DELLA-Merging: Reducing Interference in Model Merging through Magnitude-Based Sampling ☆36 · Updated last year
- Co-LLM: Learning to Decode Collaboratively with Multiple Language Models ☆126 · Updated last year