snimu / rebasin
Apply methods described in the "Git Re-basin" paper [1] to arbitrary models; a short sketch of the paper's core idea appears below. [1] Ainsworth et al., https://arxiv.org/abs/2209.04836
☆14, updated last week
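For context, the central trick in the Git Re-basin paper is to exploit permutation symmetries: the hidden units of one network are permuted to align with a second network before the two weight sets are interpolated. Below is a minimal, hypothetical PyTorch/SciPy sketch of this weight-matching step for a two-layer MLP. It illustrates the idea only; it is not the API of this repository, and it ignores biases and the cross-layer coordination that the real method handles.

```python
# Minimal sketch of the weight-matching idea behind Git Re-basin
# (Ainsworth et al., 2022) -- an illustration, NOT the rebasin package's API.
# Two 2-layer MLPs with weights (W1, W2); biases are omitted for brevity.
import torch
from scipy.optimize import linear_sum_assignment


def match_hidden_units(w1_a: torch.Tensor, w1_b: torch.Tensor) -> torch.Tensor:
    """Find the hidden-unit permutation of model B that best aligns with model A."""
    # Maximize row-wise dot products <=> minimize the negated cost matrix.
    cost = -(w1_a @ w1_b.T)
    _, perm = linear_sum_assignment(cost.detach().cpu().numpy())
    return torch.as_tensor(perm)


def merge(w1_a, w2_a, w1_b, w2_b, lam: float = 0.5):
    """Permute B's hidden units to match A, then linearly interpolate the weights."""
    perm = match_hidden_units(w1_a, w1_b)
    w1_b_aligned = w1_b[perm]       # permute rows of the first layer
    w2_b_aligned = w2_b[:, perm]    # permute the matching columns of the second
    w1 = (1 - lam) * w1_a + lam * w1_b_aligned
    w2 = (1 - lam) * w2_a + lam * w2_b_aligned
    return w1, w2


if __name__ == "__main__":
    d_in, d_hidden, d_out = 32, 16, 4
    w1_a, w2_a = torch.randn(d_hidden, d_in), torch.randn(d_out, d_hidden)
    # Model B: same function as A, but with shuffled hidden units.
    shuffle = torch.randperm(d_hidden)
    w1_b, w2_b = w1_a[shuffle], w2_a[:, shuffle]
    w1, w2 = merge(w1_a, w2_a, w1_b, w2_b)
    print(torch.allclose(w1, w1_a), torch.allclose(w2, w2_a))  # expected: True True
```

The Hungarian algorithm (`scipy.optimize.linear_sum_assignment`) solves the single-layer matching problem exactly; the full method in the paper coordinates such permutations across all layers of the network.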
Alternatives and similar repositories for rebasin
Users interested in rebasin are comparing it to the libraries listed below
- Git Re-Basin: Merging Models modulo Permutation Symmetries in PyTorch (☆75, updated 2 years ago)
- Replicating and dissecting the git-re-basin project in one-click-replication Colabs (☆36, updated 2 years ago)
- What do we learn from inverting CLIP models? (☆54, updated last year)
- ☆67, updated 3 years ago
- ☆45, updated 2 years ago
- A modern look at the relationship between sharpness and generalization [ICML 2023] (☆43, updated last year)
- Code release for REPAIR: REnormalizing Permuted Activations for Interpolation Repair (☆47, updated last year)
- ☆30, updated 10 months ago
- Code for the paper "The Journey, Not the Destination: How Data Guides Diffusion Models" (☆22, updated last year)
- Data for "Datamodels: Predicting Predictions with Training Data" (☆97, updated 2 years ago)
- Finetune Google's pre-trained ViT models from HuggingFace's model hub. (☆21, updated 4 years ago)
- Personal implementation of ASIF by Antonio Norelli (☆25, updated last year)
- Recycling diverse models (☆44, updated 2 years ago)
- DiWA: Diverse Weight Averaging for Out-of-Distribution Generalization (☆31, updated 2 years ago)
- Latest Weight Averaging (NeurIPS HITY 2022) (☆30, updated last year)
- ☆28, updated 2 years ago
- ☆34, updated last week
- Revisiting Efficient Training Algorithms For Transformer-based Language Models (NeurIPS 2023) (☆80, updated last year)
- Do input gradients highlight discriminative features? [NeurIPS 2021] (https://arxiv.org/abs/2102.12781) (☆13, updated 2 years ago)
- Source code of "Task arithmetic in the tangent space: Improved editing of pre-trained models". (☆102, updated last year)
- Model Fusion via Optimal Transport, NeurIPS 2020 (☆145, updated 2 years ago)
- [NeurIPS 2024] Goldfish Loss: Mitigating Memorization in Generative LLMs (☆87, updated 6 months ago)
- Stanford NLP Python library for benchmarking the utility of LLM interpretability methods (☆89, updated last week)
- MetaShift: A Dataset of Datasets for Evaluating Contextual Distribution Shifts and Training Conflicts (ICLR 2022) (☆109, updated 2 years ago)
- ☆95, updated 2 years ago
- Sharpness-Aware Minimization Leads to Low-Rank Features [NeurIPS 2023] (☆28, updated last year)
- ☆23, updated 2 years ago
- Codes for the paper "Optimizing Mode Connectivity via Neuron Alignment" from NeurIPS 2020. (☆16, updated 4 years ago)
- ☆14, updated last year
- A simple and efficient baseline for data attribution (☆11, updated last year)