Aratako / Task-Vector-Merge-Optimzier
☆14 · Updated 11 months ago
Alternatives and similar repositories for Task-Vector-Merge-Optimzier:
Users interested in Task-Vector-Merge-Optimzier are comparing it to the repositories listed below.
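For context, the task-vector merging these projects revolve around treats each finetuned model as a delta ("task vector") over shared base weights, and the "optimizer" part is searching for good mixing coefficients. Below is a minimal sketch of the core merge step, assuming plain PyTorch state dicts; all function and variable names are illustrative and not taken from any repository on this page.

```python
import torch

def task_vector(base_state, finetuned_state):
    # A task vector is the element-wise delta between finetuned and base weights.
    return {k: finetuned_state[k] - base_state[k] for k in base_state}

def merge(base_state, task_vectors, coefficients):
    # Merged model = base + sum_i c_i * tau_i; optimizers for merging
    # search over the coefficients c_i (e.g. per-model or per-layer).
    merged = {k: v.clone() for k, v in base_state.items()}
    for tv, c in zip(task_vectors, coefficients):
        for k in merged:
            merged[k] += c * tv[k]
    return merged

# Toy usage with hypothetical two-parameter "models":
base = {"w": torch.zeros(2)}
ft_a = {"w": torch.tensor([1.0, 0.0])}
ft_b = {"w": torch.tensor([0.0, 2.0])}
tvs = [task_vector(base, ft_a), task_vector(base, ft_b)]
print(merge(base, tvs, coefficients=[0.5, 0.25])["w"])  # tensor([0.5000, 0.5000])
```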
- Repository containing the SPIN experiments on the DIBT 10k ranked prompts ☆24 · Updated last year
- Plug-and-play PyTorch implementation of the paper "Evolutionary Optimization of Model Merging Recipes" by Sakana AI ☆30 · Updated 4 months ago
- ☆48 · Updated 4 months ago
- Unofficial implementation of Evolutionary Model Merging ☆35 · Updated 11 months ago
- Demonstration that finetuning a RoPE model on longer sequences than it saw during pre-training extends its context limit ☆63 · Updated last year
- A repository for research on medium-sized language models. ☆76 · Updated 9 months ago
- ☆31 · Updated 9 months ago
- Script for processing OpenAI's PRM800K process supervision dataset into an Alpaca-style instruction-response format ☆27 · Updated last year
- ☆27 · Updated last year
- ☆13 · Updated 3 months ago
- Reference implementation for Reward-Augmented Decoding: Efficient Controlled Text Generation With a Unidirectional Reward Model ☆42 · Updated last year
- This repository contains code for removing benchmark data from your training data to help combat data snooping. ☆25 · Updated last year
- Anchored Preference Optimization and Contrastive Revisions: Addressing Underspecification in Alignment ☆55 · Updated 6 months ago
- entropix-style sampling + GUI ☆25 · Updated 4 months ago
- Code for the examples presented in the talk "Training a Llama in your backyard: fine-tuning very large models on consumer hardware" given… ☆14 · Updated last year
- Checkpointable dataset utilities for foundation model training ☆32 · Updated last year
- Implementation of https://arxiv.org/pdf/2312.09299 ☆20 · Updated 8 months ago
- ☆49 · Updated last year
- ☆31 · Updated last year
- [WIP] Transformer to embed Danbooru labelsets ☆13 · Updated 11 months ago
- Unofficial entropix implementation for Gemma2, Llama, Qwen2, and Mistral ☆17 · Updated 2 months ago
- RWKV is an RNN with transformer-level LLM performance. It can be trained directly like a GPT (parallelizable), so it combines the best … ☆10 · Updated last year
- A public implementation of the ReLoRA pretraining method, built on Lightning-AI's PyTorch Lightning suite. ☆33 · Updated last year
- ☆63 · Updated 5 months ago
- My implementation of Q-Sparse: All Large Language Models can be Fully Sparsely-Activated ☆31 · Updated 7 months ago
- Official repository for the paper "Approximating Two-Layer Feedforward Networks for Efficient Transformers" ☆36 · Updated last year
- ☆22 · Updated last year