SakanaAI / CycleQD
CycleQD is a framework for parameter space model merging.
☆44 · Updated 8 months ago
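Parameter space model merging combines models by operating directly on their weights rather than ensembling their outputs. As a rough illustration only (a hypothetical helper, not CycleQD's actual algorithm, which searches over merging recipes rather than using a fixed coefficient), a linear merge of two parameter dicts can be sketched as:

```python
def merge_parameters(state_a, state_b, alpha=0.5):
    """Linearly interpolate two parameter dicts: alpha * A + (1 - alpha) * B.

    Illustrative sketch; real merging would operate on tensors in a
    model's state_dict, and frameworks like CycleQD tune the recipe
    rather than fixing alpha.
    """
    assert state_a.keys() == state_b.keys(), "models must share a parameter layout"
    return {
        name: alpha * state_a[name] + (1 - alpha) * state_b[name]
        for name in state_a
    }

# Toy example with scalar "parameters":
merged = merge_parameters({"w": 1.0, "b": 0.0}, {"w": 3.0, "b": 2.0}, alpha=0.25)
print(merged)  # {'w': 2.5, 'b': 1.5}
```

The same interpolation applies elementwise to full weight tensors when the two models share an architecture.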
Alternatives and similar repositories for CycleQD
Users interested in CycleQD are comparing it to the libraries listed below.
- Code for "Discovering Preference Optimization Algorithms with and for Large Language Models" ☆63 · Updated last year
- Checkpointable dataset utilities for foundation model training ☆31 · Updated last year
- ☆14 · Updated last year
- Official implementation of "TAID: Temporally Adaptive Interpolated Distillation for Efficient Knowledge Transfer in Language Models" ☆114 · Updated this week
- A repository for research on medium-sized language models ☆78 · Updated last year
- Mamba training library developed by Kotoba Technologies ☆70 · Updated last year
- List of papers on self-correction of LLMs ☆78 · Updated 9 months ago
- ☆20 · Updated last year
- Memory Mosaics are networks of associative memories working in concert to achieve a prediction task ☆48 · Updated 8 months ago
- Lottery Ticket Adaptation ☆40 · Updated 10 months ago
- ☆22 · Updated 2 years ago
- Train, tune, and infer the Bamba model ☆133 · Updated 4 months ago
- Ongoing research project for continual pre-training of LLMs (dense model) ☆42 · Updated 7 months ago
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆60 · Updated last year
- Plug-and-play PyTorch implementation of the paper "Evolutionary Optimization of Model Merging Recipes" by Sakana AI ☆29 · Updated 11 months ago
- ☆12 · Updated 6 months ago
- Example of using Epochraft to train HuggingFace Transformers models with PyTorch FSDP ☆11 · Updated last year
- Official code repository for "EnvGen: Generating and Adapting Environments via LLMs for Training Embodied Agents" (COLM 2024) ☆37 · Updated last year
- Code for the NeurIPS 2024 Spotlight "Scaling Laws and Compute-Optimal Training Beyond Fixed Training Durations" ☆84 · Updated 11 months ago
- ☆85 · Updated last year
- An AI benchmark for creative, human-like problem solving using Sudoku variants ☆102 · Updated 2 months ago
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆14 · Updated last year
- Official repository of the paper "RNNs Are Not Transformers (Yet): The Key Bottleneck on In-context Retrieval" ☆27 · Updated last year
- Large language models (LLMs) made easy: EasyLM is a one-stop solution for pre-training, fine-tuning, evaluating, and serving LLMs in JAX/Fl… ☆75 · Updated last year
- The official repository of ALE-Bench ☆116 · Updated this week
- Fluid Language Model Benchmarking ☆17 · Updated 3 weeks ago
- Minimum Description Length probing for neural network representations ☆20 · Updated 8 months ago
- Triton implementation of the HyperAttention algorithm ☆48 · Updated last year
- Implementation of the paper "Leave No Context Behind: Efficient Infinite Context Transformers with Infini-attention" from Google in pyTO… ☆56 · Updated this week
- ☆57 · Updated last week