SakanaAI / CycleQD
CycleQD is a framework for parameter space model merging.
☆39 · Updated 3 months ago
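Parameter-space merging combines models by operating directly on their weights rather than their outputs. As a point of reference, here is a minimal sketch of the simplest such technique, linear weight interpolation; this is a generic illustration, not CycleQD's actual algorithm (CycleQD searches over merging recipes with quality-diversity optimization), and the function and parameter names are illustrative only.

```python
def merge_state_dicts(state_dicts, weights):
    """Merge models by a weighted average of their parameters.

    state_dicts: list of {param_name: list-of-floats} with identical keys.
    weights: one mixing coefficient per model (typically summing to 1).
    """
    assert len(state_dicts) == len(weights)
    merged = {}
    for key in state_dicts[0]:
        # Walk the parameter vectors in lockstep and average element-wise.
        columns = zip(*(sd[key] for sd in state_dicts))
        merged[key] = [sum(w * v for w, v in zip(weights, col)) for col in columns]
    return merged

# Tiny example: an equal-weight merge of two 2-parameter "models".
a = {"layer.w": [1.0, 2.0]}
b = {"layer.w": [3.0, 4.0]}
print(merge_state_dicts([a, b], [0.5, 0.5]))  # {'layer.w': [2.0, 3.0]}
```

In practice the same averaging is applied per-tensor to real framework state dicts; methods like CycleQD differ in how the mixing coefficients are chosen, not in the basic parameter-space operation.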
Alternatives and similar repositories for CycleQD:
Users interested in CycleQD are comparing it to the repositories listed below.
- Code for Discovering Preference Optimization Algorithms with and for Large Language Models ☆61 · Updated 10 months ago
- [ICLR 2025] SDTT: a simple and effective distillation method for discrete diffusion models ☆24 · Updated last month
- An AI benchmark for creative, human-like problem solving using Sudoku variants ☆42 · Updated 2 weeks ago
- Official implementation of "TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models" ☆103 · Updated 3 months ago
- Mamba training library developed by Kotoba Technologies ☆69 · Updated last year
- Checkpointable dataset utilities for foundation model training ☆32 · Updated last year
- Multi-Agent Verification: Scaling Test-Time Compute with Multiple Verifiers ☆17 · Updated 2 months ago
- Example of using Epochraft to train HuggingFace transformers models with PyTorch FSDP ☆11 · Updated last year
- Memory Mosaics are networks of associative memories working in concert to achieve a prediction task ☆41 · Updated 3 months ago
- A repository for research on medium-sized language models ☆76 · Updated 11 months ago
- List of papers on self-correction of LLMs ☆72 · Updated 4 months ago
- A benchmark for evaluating situated inductive reasoning ☆15 · Updated 4 months ago
- Plug-and-play PyTorch implementation of the paper "Evolutionary Optimization of Model Merging Recipes" by Sakana AI ☆30 · Updated 5 months ago
- Train, tune, and infer Bamba model ☆115 · Updated last week
- Implementation of Mind Evolution ("Evolving Deeper LLM Thinking") from DeepMind ☆49 · Updated 3 months ago
- ☆22 · Updated last year
- ☆14 · Updated last year
- Lottery Ticket Adaptation ☆39 · Updated 5 months ago
- Ongoing research project for continual pre-training of LLMs (dense model) ☆40 · Updated 2 months ago
- Code for NeurIPS 2024 Spotlight: "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆72 · Updated 6 months ago
- This repo is based on https://github.com/jiaweizzhao/GaLore ☆27 · Updated 7 months ago
- ☆78 · Updated 8 months ago
- ☆16 · Updated 8 months ago
- Ongoing research training Mixture of Experts models ☆19 · Updated 7 months ago
- ☆54 · Updated 8 months ago
- Official repo for BOOKWORLD: From Novels to Interactive Agent Societies for Story Creation ☆30 · Updated last week
- Repository for the Q-Filters method (https://arxiv.org/pdf/2503.02812) ☆30 · Updated 2 months ago
- ☆10 · Updated last month
- ☆92 · Updated 7 months ago
- Official repository of "LiNeS: Post-training Layer Scaling Prevents Forgetting and Enhances Model Merging" ☆26 · Updated 6 months ago