SakanaAI / CycleQD
CycleQD is a framework for parameter-space model merging.
☆44 · Updated 9 months ago
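As a rough illustration of parameter-space merging in general (a minimal sketch of plain weight interpolation, not CycleQD's actual quality-diversity recipe; the function and model names below are hypothetical):

```python
# Minimal sketch: merge two models with identical architectures by
# linearly interpolating their weights. Illustrative only; CycleQD's
# own recipe combines quality-diversity search with merging.
import torch

def merge_state_dicts(sd_a, sd_b, alpha=0.5):
    """Return alpha * sd_a + (1 - alpha) * sd_b, key by key."""
    merged = {}
    for name, t_a in sd_a.items():
        t_b = sd_b[name]
        assert t_a.shape == t_b.shape, f"shape mismatch at {name}"
        merged[name] = alpha * t_a + (1.0 - alpha) * t_b
    return merged

# Hypothetical usage with two fine-tunes of the same base model:
# merged_sd = merge_state_dicts(model_a.state_dict(), model_b.state_dict(), alpha=0.3)
# base_model.load_state_dict(merged_sd)
```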
Alternatives and similar repositories for CycleQD
Users interested in CycleQD are comparing it to the libraries listed below.
- Code for "Discovering Preference Optimization Algorithms with and for Large Language Models" ☆63 · Updated last year
- Official implementation of "TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models" ☆118 · Updated 3 weeks ago
- ☆22 · Updated 2 years ago
- ☆14 · Updated last year
- Checkpointable dataset utilities for foundation model training ☆31 · Updated last year
- Mamba training library developed by Kotoba Technologies ☆69 · Updated last year
- Train, tune, and run inference with the Bamba model ☆135 · Updated 4 months ago
- List of papers on self-correction of LLMs ☆80 · Updated 10 months ago
- The official repository of ALE-Bench ☆121 · Updated this week
- Swallow project: evaluation framework for post-trained large language models ☆21 · Updated 2 weeks ago
- Ongoing research project for continual pre-training of LLMs (dense model) ☆42 · Updated 8 months ago
- Example of using Epochraft to train HuggingFace transformers models with PyTorch FSDP ☆11 · Updated last year
- An AI benchmark for creative, human-like problem solving using Sudoku variants ☆105 · Updated 3 months ago
- Lottery Ticket Adaptation ☆40 · Updated 11 months ago
- A repository for research on medium-sized language models ☆78 · Updated last year
- [ICLR 2025] SDTT: a simple and effective distillation method for discrete diffusion models ☆41 · Updated last month
- Memory Mosaics are networks of associative memories working in concert to achieve a prediction task ☆48 · Updated 9 months ago
- ☆12 · Updated 7 months ago
- ☆20 · Updated last year
- ☆27 · Updated last year
- Kaggle AIMO2 solution with token-efficient reasoning LLM recipes ☆37 · Updated 2 months ago
- Bayes-Adaptive RL for LLM Reasoning ☆40 · Updated 5 months ago
- Plug-and-play PyTorch implementation of the paper "Evolutionary Optimization of Model Merging Recipes" by Sakana AI ☆29 · Updated 11 months ago
- Efficiently discovering algorithms via LLMs with evolutionary search and reinforcement learning ☆116 · Updated last week
- ☆61 · Updated last year
- Supports continual pre-training and instruction tuning; forked from llama-recipes ☆33 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated last year
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆84 · Updated last year
- Code for the "Cultural evolution in populations of Large Language Models" paper ☆31 · Updated last year
- Ongoing research training Mixture-of-Experts models ☆21 · Updated last year