SakanaAI / CycleQD
CycleQD is a framework for parameter-space model merging.
☆43 · Updated 7 months ago
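For context on the one-line description above: parameter-space merging combines trained checkpoints by operating directly on their weights. The sketch below shows the simplest such operation, a weighted average of two PyTorch state dicts. It is a minimal illustration under assumed inputs, not CycleQD's actual algorithm (which evolves merging recipes with quality-diversity search); `model_a` and `model_b` in the usage comment are hypothetical fine-tunes of the same base model.

```python
# Minimal sketch of parameter-space model merging: linearly interpolate
# two checkpoints with matching architectures. This illustrates the
# general merge operation only; it is NOT CycleQD's algorithm, which
# searches over merging recipes with quality-diversity optimization.
import torch

def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Return alpha * sd_a + (1 - alpha) * sd_b, key by key."""
    merged = {}
    for key, tensor_a in sd_a.items():
        tensor_b = sd_b[key]
        if tensor_a.shape != tensor_b.shape:
            raise ValueError(f"shape mismatch for parameter: {key}")
        if not torch.is_floating_point(tensor_a):
            # Copy integer buffers (e.g. batch counters) through unchanged.
            merged[key] = tensor_a
            continue
        merged[key] = alpha * tensor_a + (1.0 - alpha) * tensor_b
    return merged

# Hypothetical usage with two fine-tunes of the same base model:
# merged = merge_state_dicts(model_a.state_dict(), model_b.state_dict(), 0.3)
# model_a.load_state_dict(merged)
```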
Alternatives and similar repositories for CycleQD
Users interested in CycleQD are comparing it to the libraries listed below.
- Code for Discovering Preference Optimization Algorithms with and for Large Language Models ☆63 · Updated last year
- Official implementation of "TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models" ☆115 · Updated 7 months ago
- Checkpointable dataset utilities for foundation model training ☆32 · Updated last year
- An AI benchmark for creative, human-like problem solving using Sudoku variants ☆91 · Updated last month
- A repository for research on medium-sized language models ☆78 · Updated last year
- Mamba training library developed by Kotoba Technologies ☆71 · Updated last year
- ☆22 · Updated last year
- Train, tune, and infer the Bamba model ☆131 · Updated 2 months ago
- Lottery Ticket Adaptation ☆39 · Updated 9 months ago
- Multi-Agent Verification: Scaling Test-Time Compute with Multiple Verifiers ☆20 · Updated 6 months ago
- Ongoing research project for continual pre-training of LLMs (dense model) ☆42 · Updated 5 months ago
- ☆14 · Updated last year
- The official repository of ALE-Bench ☆112 · Updated this week
- Plug-and-play PyTorch implementation of the paper "Evolutionary Optimization of Model Merging Recipes" by Sakana AI (see the sketch after this list) ☆31 · Updated 9 months ago
- [ICLR 2025] SDTT: a simple and effective distillation method for discrete diffusion models ☆34 · Updated 5 months ago
- Memory Mosaics are networks of associative memories working in concert to achieve a prediction task ☆48 · Updated 7 months ago
- List of papers on self-correction of LLMs ☆74 · Updated 8 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated last year
- Bayes-Adaptive RL for LLM Reasoning ☆37 · Updated 3 months ago
- ☆23 · Updated last year
- Official code repository for EnvGen: Generating and Adapting Environments via LLMs for Training Embodied Agents (COLM 2024) ☆35 · Updated last year
- Efficiently discovering algorithms via LLMs with evolutionary search and reinforcement learning ☆106 · Updated 3 weeks ago
- Code for the paper "Cultural Evolution in Populations of Large Language Models" ☆32 · Updated 10 months ago
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆82 · Updated 10 months ago
- ☆58 · Updated 3 months ago
- ☆46 · Updated last year
- ☆19 · Updated last year
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆14 · Updated last year
- Supports continual pre-training & instruction tuning; forked from llama-recipes ☆32 · Updated last year
- Q-Probe: A Lightweight Approach to Reward Maximization for Language Models ☆41 · Updated last year
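Several entries above, notably the evolutionary merging-recipes implementation, revolve around searching for good merge coefficients rather than fixing them by hand. As a hedged, minimal sketch of that idea only (not the paper's algorithm, which uses evolution strategies such as CMA-ES over merging configurations), here is a toy hill climb over per-key merge weights; `fitness` is a hypothetical callable mapping a recipe to a task score.

```python
# Toy (1+lambda)-style hill climb over per-key merge weights, sketching the
# idea behind evolutionary merging-recipe search. This is NOT the algorithm
# from "Evolutionary Optimization of Model Merging Recipes"; `fitness` is a
# hypothetical callable: recipe (dict of weights in [0, 1]) -> task score.
import random

def evolve_merge_recipe(keys, fitness, pop_size=8, generations=20, sigma=0.1):
    best = {k: 0.5 for k in keys}  # start from plain averaging
    best_score = fitness(best)
    for _ in range(generations):
        parent = dict(best)
        for _ in range(pop_size):
            # Mutate each weight with Gaussian noise, clipped to [0, 1].
            child = {k: min(1.0, max(0.0, v + random.gauss(0.0, sigma)))
                     for k, v in parent.items()}
            score = fitness(child)
            if score > best_score:
                best, best_score = child, score
    return best, best_score

# A fitness(recipe) would merge checkpoints per key with these weights
# (e.g. via merge_state_dicts above) and evaluate the merged model on a task.
```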